Nov 29 07:00:59 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 29 07:00:59 crc restorecon[4695]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:00:59 crc restorecon[4695]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc 
restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc 
restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 
07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 
crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 
07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:00:59 crc 
restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc 
restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc 
restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 29 07:00:59 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 
crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc 
restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc 
restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:01:00 crc restorecon[4695]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:01:00 crc restorecon[4695]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 29 07:01:01 crc kubenswrapper[4828]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 07:01:01 crc kubenswrapper[4828]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 29 07:01:01 crc kubenswrapper[4828]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 07:01:01 crc kubenswrapper[4828]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 29 07:01:01 crc kubenswrapper[4828]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 29 07:01:01 crc kubenswrapper[4828]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.113874 4828 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.118892 4828 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.118919 4828 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.118925 4828 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.118931 4828 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.118937 4828 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.118942 4828 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.118946 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.118951 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.118956 4828 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 29 07:01:01 crc kubenswrapper[4828]: 
W1129 07:01:01.118961 4828 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.118966 4828 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.118970 4828 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.118975 4828 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.118991 4828 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.118996 4828 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119001 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119005 4828 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119011 4828 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119016 4828 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119022 4828 feature_gate.go:330] unrecognized feature gate: Example Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119027 4828 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119031 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119036 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119042 4828 feature_gate.go:330] 
unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119047 4828 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119051 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119057 4828 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119062 4828 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119067 4828 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119074 4828 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119080 4828 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119085 4828 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119089 4828 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119093 4828 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119098 4828 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119102 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119109 4828 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119116 4828 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119121 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119126 4828 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119131 4828 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119140 4828 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119145 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119150 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119154 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119159 4828 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119163 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119167 4828 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119173 4828 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119180 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119186 4828 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119190 4828 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119195 4828 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119199 4828 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119204 4828 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119208 4828 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119213 4828 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119219 4828 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119224 4828 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119230 4828 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119235 4828 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119240 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119245 4828 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119249 4828 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119254 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119260 4828 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119287 4828 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119292 4828 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119296 4828 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119301 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.119305 4828 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119431 4828 flags.go:64] FLAG: --address="0.0.0.0" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119444 4828 flags.go:64] FLAG: 
--allowed-unsafe-sysctls="[]" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119455 4828 flags.go:64] FLAG: --anonymous-auth="true" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119462 4828 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119469 4828 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119475 4828 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119483 4828 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119489 4828 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119495 4828 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119500 4828 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119506 4828 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119512 4828 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119517 4828 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119522 4828 flags.go:64] FLAG: --cgroup-root="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119528 4828 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119534 4828 flags.go:64] FLAG: --client-ca-file="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119539 4828 flags.go:64] FLAG: --cloud-config="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119544 4828 flags.go:64] FLAG: --cloud-provider="" Nov 29 07:01:01 crc 
kubenswrapper[4828]: I1129 07:01:01.119549 4828 flags.go:64] FLAG: --cluster-dns="[]" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119555 4828 flags.go:64] FLAG: --cluster-domain="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119560 4828 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119565 4828 flags.go:64] FLAG: --config-dir="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119570 4828 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119576 4828 flags.go:64] FLAG: --container-log-max-files="5" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119583 4828 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119588 4828 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119593 4828 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119609 4828 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119615 4828 flags.go:64] FLAG: --contention-profiling="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119620 4828 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119626 4828 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119632 4828 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119637 4828 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119645 4828 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119651 4828 flags.go:64] FLAG: 
--enable-controller-attach-detach="true" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119657 4828 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119663 4828 flags.go:64] FLAG: --enable-load-reader="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119668 4828 flags.go:64] FLAG: --enable-server="true" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119674 4828 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119685 4828 flags.go:64] FLAG: --event-burst="100" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119693 4828 flags.go:64] FLAG: --event-qps="50" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119698 4828 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119704 4828 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119710 4828 flags.go:64] FLAG: --eviction-hard="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119716 4828 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119722 4828 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119727 4828 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119733 4828 flags.go:64] FLAG: --eviction-soft="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119738 4828 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119743 4828 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119748 4828 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119754 
4828 flags.go:64] FLAG: --experimental-mounter-path="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119759 4828 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119764 4828 flags.go:64] FLAG: --fail-swap-on="true" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119770 4828 flags.go:64] FLAG: --feature-gates="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119795 4828 flags.go:64] FLAG: --file-check-frequency="20s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119801 4828 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119807 4828 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119812 4828 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119818 4828 flags.go:64] FLAG: --healthz-port="10248" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119823 4828 flags.go:64] FLAG: --help="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119828 4828 flags.go:64] FLAG: --hostname-override="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119833 4828 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119855 4828 flags.go:64] FLAG: --http-check-frequency="20s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119860 4828 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119865 4828 flags.go:64] FLAG: --image-credential-provider-config="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119871 4828 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119876 4828 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119882 4828 flags.go:64] FLAG: 
--image-service-endpoint="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119888 4828 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119894 4828 flags.go:64] FLAG: --kube-api-burst="100" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119900 4828 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119907 4828 flags.go:64] FLAG: --kube-api-qps="50" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119914 4828 flags.go:64] FLAG: --kube-reserved="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119920 4828 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119927 4828 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119934 4828 flags.go:64] FLAG: --kubelet-cgroups="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119940 4828 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119945 4828 flags.go:64] FLAG: --lock-file="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119951 4828 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119958 4828 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119965 4828 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119975 4828 flags.go:64] FLAG: --log-json-split-stream="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119982 4828 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119989 4828 flags.go:64] FLAG: --log-text-split-stream="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.119995 4828 flags.go:64] FLAG: 
--logging-format="text" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120002 4828 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120009 4828 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120015 4828 flags.go:64] FLAG: --manifest-url="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120021 4828 flags.go:64] FLAG: --manifest-url-header="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120049 4828 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120055 4828 flags.go:64] FLAG: --max-open-files="1000000" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120062 4828 flags.go:64] FLAG: --max-pods="110" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120067 4828 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120073 4828 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120079 4828 flags.go:64] FLAG: --memory-manager-policy="None" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120084 4828 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120089 4828 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120096 4828 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120107 4828 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120130 4828 flags.go:64] FLAG: --node-status-max-images="50" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120135 4828 flags.go:64] FLAG: 
--node-status-update-frequency="10s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120140 4828 flags.go:64] FLAG: --oom-score-adj="-999" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120146 4828 flags.go:64] FLAG: --pod-cidr="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120151 4828 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120161 4828 flags.go:64] FLAG: --pod-manifest-path="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120166 4828 flags.go:64] FLAG: --pod-max-pids="-1" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120171 4828 flags.go:64] FLAG: --pods-per-core="0" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120177 4828 flags.go:64] FLAG: --port="10250" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120182 4828 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120187 4828 flags.go:64] FLAG: --provider-id="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120192 4828 flags.go:64] FLAG: --qos-reserved="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120198 4828 flags.go:64] FLAG: --read-only-port="10255" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120203 4828 flags.go:64] FLAG: --register-node="true" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120208 4828 flags.go:64] FLAG: --register-schedulable="true" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120213 4828 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120223 4828 flags.go:64] FLAG: --registry-burst="10" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120228 4828 flags.go:64] FLAG: --registry-qps="5" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 
07:01:01.120233 4828 flags.go:64] FLAG: --reserved-cpus="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120238 4828 flags.go:64] FLAG: --reserved-memory="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120245 4828 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120250 4828 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120255 4828 flags.go:64] FLAG: --rotate-certificates="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120260 4828 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120280 4828 flags.go:64] FLAG: --runonce="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120285 4828 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120291 4828 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120296 4828 flags.go:64] FLAG: --seccomp-default="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120301 4828 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120307 4828 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120312 4828 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120332 4828 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120337 4828 flags.go:64] FLAG: --storage-driver-password="root" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120342 4828 flags.go:64] FLAG: --storage-driver-secure="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120347 4828 flags.go:64] FLAG: --storage-driver-table="stats" Nov 29 07:01:01 crc 
kubenswrapper[4828]: I1129 07:01:01.120353 4828 flags.go:64] FLAG: --storage-driver-user="root" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120359 4828 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120364 4828 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120369 4828 flags.go:64] FLAG: --system-cgroups="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120374 4828 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120384 4828 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120389 4828 flags.go:64] FLAG: --tls-cert-file="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120394 4828 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120404 4828 flags.go:64] FLAG: --tls-min-version="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120409 4828 flags.go:64] FLAG: --tls-private-key-file="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120414 4828 flags.go:64] FLAG: --topology-manager-policy="none" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120419 4828 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120426 4828 flags.go:64] FLAG: --topology-manager-scope="container" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120431 4828 flags.go:64] FLAG: --v="2" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120439 4828 flags.go:64] FLAG: --version="false" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120446 4828 flags.go:64] FLAG: --vmodule="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.120453 4828 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 29 07:01:01 crc 
kubenswrapper[4828]: I1129 07:01:01.120458 4828 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120620 4828 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120629 4828 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120636 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120641 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120646 4828 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120651 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120655 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120660 4828 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120665 4828 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120669 4828 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120676 4828 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120681 4828 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120685 4828 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120690 4828 feature_gate.go:330] 
unrecognized feature gate: SignatureStores Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120694 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120699 4828 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120704 4828 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120709 4828 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120715 4828 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120719 4828 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120724 4828 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120728 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120733 4828 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120737 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120742 4828 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120748 4828 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120754 4828 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120759 4828 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120764 4828 feature_gate.go:330] unrecognized feature gate: Example Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120769 4828 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120773 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120777 4828 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120781 4828 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120786 4828 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120790 4828 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120795 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120800 4828 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120804 4828 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120808 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120813 4828 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120817 4828 feature_gate.go:330] 
unrecognized feature gate: BuildCSIVolumes Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120821 4828 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120828 4828 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120832 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120837 4828 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120843 4828 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120849 4828 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120855 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120859 4828 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120865 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120869 4828 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120874 4828 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120879 4828 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120883 4828 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120890 4828 feature_gate.go:351] Setting deprecated feature gate 
KMSv1=true. It will be removed in a future release. Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120895 4828 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120903 4828 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120907 4828 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120912 4828 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120916 4828 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120921 4828 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120925 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120929 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120935 4828 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120940 4828 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120945 4828 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120950 4828 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120955 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120960 4828 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120964 4828 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.120969 4828 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.121288 4828 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.131547 4828 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.131576 4828 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131646 4828 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131653 4828 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131657 4828 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131661 4828 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131666 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131682 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131686 4828 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131690 4828 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131693 4828 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131698 4828 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131702 4828 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131705 4828 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131709 4828 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131712 4828 feature_gate.go:330] unrecognized feature gate: Example
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131717 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131720 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131724 4828 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131727 4828 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131730 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131735 4828 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131743 4828 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131747 4828 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131751 4828 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131755 4828 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131759 4828 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131762 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131766 4828 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131770 4828 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131774 4828 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131777 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131781 4828 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131784 4828 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131788 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131791 4828 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131794 4828 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131798 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131802 4828 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131805 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131809 4828 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131813 4828 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131816 4828 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131820 4828 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131823 4828 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131827 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131830 4828 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131834 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131840 4828 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131845 4828 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131850 4828 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131854 4828 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131858 4828 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131863 4828 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131867 4828 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131872 4828 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131877 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131881 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131885 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131889 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131893 4828 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131898 4828 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131902 4828 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131906 4828 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131910 4828 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131914 4828 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131918 4828 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131922 4828 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131925 4828 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131929 4828 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131934 4828 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131938 4828 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.131943 4828 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.131959 4828 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132113 4828 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132126 4828 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132134 4828 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132137 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132141 4828 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132145 4828 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132148 4828 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132152 4828 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132155 4828 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132160 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132163 4828 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132167 4828 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132171 4828 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132174 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132178 4828 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132186 4828 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132189 4828 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132192 4828 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132196 4828 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132199 4828 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132203 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132206 4828 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132210 4828 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132213 4828 feature_gate.go:330] unrecognized feature gate: Example
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132217 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132220 4828 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132224 4828 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132228 4828 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132231 4828 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132234 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132239 4828 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132242 4828 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132251 4828 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132258 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132282 4828 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132295 4828 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132300 4828 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132303 4828 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132307 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132311 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132315 4828 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132318 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132322 4828 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132325 4828 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132329 4828 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132333 4828 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132338 4828 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132342 4828 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132345 4828 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132349 4828 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132352 4828 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132356 4828 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132359 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132363 4828 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132368 4828 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132373 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132378 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132383 4828 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132387 4828 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132392 4828 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132397 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132402 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132406 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132411 4828 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132425 4828 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132430 4828 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132439 4828 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132449 4828 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132454 4828 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132460 4828 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.132469 4828 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.132477 4828 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.132647 4828 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.137703 4828 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.137949 4828 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.139970 4828 server.go:997] "Starting client certificate rotation"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.140061 4828 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.140226 4828 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-07 05:11:29.871235481 +0000 UTC
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.140396 4828 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 934h10m28.730844451s for next certificate rotation
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.149165 4828 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.151196 4828 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.162215 4828 log.go:25] "Validated CRI v1 runtime API"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.177740 4828 log.go:25] "Validated CRI v1 image API"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.183000 4828 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.189197 4828 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-29-06-55-28-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.189241 4828 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.205843 4828 manager.go:217] Machine: {Timestamp:2025-11-29 07:01:01.204134112 +0000 UTC m=+0.826210200 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:0abdd982-eeb9-4e63-b4dc-a9e6bc31d088 BootID:f1da721f-d6f2-4e3a-b5e9-e25de0b32409 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:84:59:a2 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:84:59:a2 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:df:03:e8 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:4d:13:1c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:98:24:52 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:45:80:8e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b2:ed:9e:8b:dc:ef Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2e:df:3e:86:0b:c8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.207131 4828 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.207535 4828 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.208872 4828 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.209157 4828 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.209220 4828 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.209499 4828 topology_manager.go:138] "Creating topology manager with none policy"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.209513 4828 container_manager_linux.go:303] "Creating device plugin manager"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.209822 4828 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.209861 4828 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.210193 4828 state_mem.go:36] "Initialized new in-memory state store"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.210330 4828 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.211310 4828 kubelet.go:418] "Attempting to sync node with API server"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.211338 4828 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.211375 4828 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.211391 4828 kubelet.go:324] "Adding apiserver pod source"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.211404 4828 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.214391 4828 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.285469 4828 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.285765 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused
Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.285967 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.96:6443: connect: connection refused" logger="UnhandledError"
Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.287011 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused
Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.287191 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.96:6443: connect: connection refused" logger="UnhandledError"
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.288337 4828 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 29 07:01:01 crc
kubenswrapper[4828]: I1129 07:01:01.289011 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.289038 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.289048 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.289057 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.289069 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.289080 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.289091 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.289102 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.289111 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.289120 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.289134 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.289143 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.289365 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.289911 4828 server.go:1280] "Started kubelet" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 
07:01:01.290123 4828 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.290233 4828 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.290357 4828 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 29 07:01:01 crc systemd[1]: Started Kubernetes Kubelet. Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.291241 4828 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.291930 4828 server.go:460] "Adding debug handlers to kubelet server" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.296413 4828 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.296480 4828 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.296949 4828 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.297047 4828 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:43:55.653830959 +0000 UTC Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.297144 4828 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.297182 4828 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.297331 4828 desired_state_of_world_populator.go:146] 
"Desired state populator starts to run" Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.297871 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.297970 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.96:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.298059 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" interval="200ms" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304365 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304418 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304432 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304444 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304456 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304468 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304478 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304489 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304502 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304513 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304525 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304537 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304548 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304562 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304574 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304587 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304598 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304611 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304623 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304633 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304644 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" 
seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304656 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304668 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304679 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304690 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304700 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304742 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 
07:01:01.304755 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304769 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304781 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304794 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304806 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304817 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304830 4828 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304841 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304853 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304863 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304873 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304884 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304895 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304906 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304918 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304929 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304940 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304951 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304962 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304974 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304987 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.304997 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305007 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305017 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305028 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" 
seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305044 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305057 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305069 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305083 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305096 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305110 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305123 4828 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305135 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305147 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305160 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305171 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305184 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305196 4828 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305207 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305218 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305230 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305242 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305253 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305278 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305292 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305303 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305315 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305325 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305336 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305348 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305359 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305370 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305381 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305392 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305403 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305413 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305424 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305435 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305447 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305458 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305469 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305480 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305492 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305502 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305513 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.305525 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.369412 4828 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372022 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372080 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372101 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372121 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372142 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372163 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372183 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372281 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372302 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372378 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372399 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372466 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372490 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372511 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372534 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372550 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372568 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372613 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372631 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372649 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372669 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372691 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372708 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372724 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372743 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372785 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372808 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372830 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372853 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372872 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372889 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372907 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372926 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372941 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372956 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.372997 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373023 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" 
seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373037 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.371942 4828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.96:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c6824cd006a69 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:01:01.289859689 +0000 UTC m=+0.911935757,LastTimestamp:2025-11-29 07:01:01.289859689 +0000 UTC m=+0.911935757,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373052 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373130 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373169 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373192 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373209 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373229 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373248 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373301 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373318 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373342 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373359 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373376 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373393 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373410 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373427 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373444 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373461 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373478 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373497 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373515 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373540 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" 
seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373556 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373573 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373590 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373606 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373712 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373734 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373753 4828 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373768 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373786 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373802 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373819 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373835 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373851 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373866 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373883 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373898 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373914 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373929 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373945 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373960 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373974 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.373988 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374001 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374017 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374032 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374047 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374063 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374078 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374092 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374106 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374119 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374134 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374148 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374161 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374176 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374210 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374223 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" 
seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374239 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374251 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374287 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374301 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374312 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374324 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 
07:01:01.374336 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374350 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374373 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374387 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374400 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374415 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374430 4828 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374444 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374460 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374476 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374489 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374504 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374536 4828 reconstruct.go:97] "Volume reconstruction finished" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.374549 4828 reconciler.go:26] 
"Reconciler: start to sync state" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.379871 4828 factory.go:55] Registering systemd factory Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.379931 4828 factory.go:221] Registration of the systemd container factory successfully Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.380381 4828 factory.go:153] Registering CRI-O factory Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.380435 4828 factory.go:221] Registration of the crio container factory successfully Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.380542 4828 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.380577 4828 factory.go:103] Registering Raw factory Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.380601 4828 manager.go:1196] Started watching for new ooms in manager Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.384200 4828 manager.go:319] Starting recovery of all containers Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.397197 4828 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.401081 4828 manager.go:324] Recovery completed Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.406815 4828 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.410093 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.410343 4828 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.410423 4828 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.410477 4828 kubelet.go:2335] "Starting kubelet main sync loop" Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.410553 4828 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 29 07:01:01 crc kubenswrapper[4828]: W1129 07:01:01.411321 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.411376 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.96:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.412504 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.412546 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.412570 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.414210 4828 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.414233 4828 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" 
Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.414418 4828 state_mem.go:36] "Initialized new in-memory state store" Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.497756 4828 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.499592 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" interval="400ms" Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.511679 4828 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.597954 4828 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.698431 4828 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.712710 4828 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.798549 4828 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.899848 4828 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:01:01 crc kubenswrapper[4828]: E1129 07:01:01.900296 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" interval="800ms" Nov 
29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.919768 4828 policy_none.go:49] "None policy: Start" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.920650 4828 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 29 07:01:01 crc kubenswrapper[4828]: I1129 07:01:01.920694 4828 state_mem.go:35] "Initializing new in-memory state store" Nov 29 07:01:02 crc kubenswrapper[4828]: E1129 07:01:02.000523 4828 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:01:02 crc kubenswrapper[4828]: E1129 07:01:02.101169 4828 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:01:02 crc kubenswrapper[4828]: E1129 07:01:02.113877 4828 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.139679 4828 manager.go:334] "Starting Device Plugin manager" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.139755 4828 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.139773 4828 server.go:79] "Starting device plugin registration server" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.140244 4828 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.140286 4828 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.140711 4828 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.140827 4828 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.140837 4828 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 29 07:01:02 crc kubenswrapper[4828]: E1129 07:01:02.147448 4828 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 29 07:01:02 crc kubenswrapper[4828]: W1129 07:01:02.208939 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:02 crc kubenswrapper[4828]: E1129 07:01:02.209659 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.96:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.240599 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.242658 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.242858 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.242954 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.243373 4828 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:01:02 crc kubenswrapper[4828]: E1129 07:01:02.244284 4828 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.129.56.96:6443: connect: connection refused" node="crc" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.292089 4828 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.297227 4828 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:58:28.380813999 +0000 UTC Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.297382 4828 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 685h57m26.083435358s for next certificate rotation Nov 29 07:01:02 crc kubenswrapper[4828]: W1129 07:01:02.317218 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:02 crc kubenswrapper[4828]: E1129 07:01:02.317330 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.96:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.444903 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.448820 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.448886 4828 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.448900 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.448934 4828 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:01:02 crc kubenswrapper[4828]: E1129 07:01:02.449571 4828 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.96:6443: connect: connection refused" node="crc" Nov 29 07:01:02 crc kubenswrapper[4828]: W1129 07:01:02.611146 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:02 crc kubenswrapper[4828]: E1129 07:01:02.611316 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.96:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:01:02 crc kubenswrapper[4828]: W1129 07:01:02.661251 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:02 crc kubenswrapper[4828]: E1129 07:01:02.661522 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.96:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:01:02 crc kubenswrapper[4828]: E1129 07:01:02.701795 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" interval="1.6s" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.850119 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.851312 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.851368 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.851381 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.851412 4828 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:01:02 crc kubenswrapper[4828]: E1129 07:01:02.851900 4828 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.96:6443: connect: connection refused" node="crc" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.914058 4828 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 29 07:01:02 crc 
kubenswrapper[4828]: I1129 07:01:02.914235 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.915822 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.915865 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.915878 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.916095 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.916396 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.916441 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.917257 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.917299 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.917316 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.917335 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.917359 4828 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.917368 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.917451 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.917709 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.917787 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.918181 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.918221 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.918233 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.918388 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.918509 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.918545 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.919247 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.919301 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.919311 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.919260 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.919385 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.919388 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.919420 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.919430 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.919397 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.919606 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.919633 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.919664 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.920469 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.920513 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.920524 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.920469 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.920608 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.920622 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.920735 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.920767 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.921647 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.921676 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:02 crc kubenswrapper[4828]: I1129 07:01:02.921697 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.097939 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.098001 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.098030 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 
07:01:03.098048 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.098064 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.098080 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.098096 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.098335 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.098407 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.098674 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.098853 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.099030 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.099075 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.099107 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.099137 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200395 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200480 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200512 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200539 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200561 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200583 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200603 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200630 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200705 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200636 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200735 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200776 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200748 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200823 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200787 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200754 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200760 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200919 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200829 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200970 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200950 4828 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.200996 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.201010 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.201034 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.201054 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.201086 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.201141 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.201176 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.201320 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.201311 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.255719 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.276935 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.286074 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.291728 4828 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.309118 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.317576 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:01:03 crc kubenswrapper[4828]: E1129 07:01:03.491448 4828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.96:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c6824cd006a69 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:01:01.289859689 +0000 UTC m=+0.911935757,LastTimestamp:2025-11-29 07:01:01.289859689 +0000 UTC m=+0.911935757,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.653009 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:03 
crc kubenswrapper[4828]: I1129 07:01:03.655502 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.655560 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.655575 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:03 crc kubenswrapper[4828]: I1129 07:01:03.655612 4828 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:01:03 crc kubenswrapper[4828]: E1129 07:01:03.656384 4828 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.96:6443: connect: connection refused" node="crc" Nov 29 07:01:04 crc kubenswrapper[4828]: W1129 07:01:04.019467 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:04 crc kubenswrapper[4828]: E1129 07:01:04.019622 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.96:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.291058 4828 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:04 crc kubenswrapper[4828]: 
E1129 07:01:04.303206 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" interval="3.2s" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.425334 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9"} Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.425451 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cc1d55178830d6888b0bb332ab3a5fb07add42b1816e26cb9330fe2451c7090e"} Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.425564 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.427017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.427044 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.427055 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.428778 4828 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="1744b077d82a17551442cb53acdd49a5d3dca6ce86e7028788b4f575061e493f" exitCode=0 Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.428854 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"1744b077d82a17551442cb53acdd49a5d3dca6ce86e7028788b4f575061e493f"} Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.428878 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"71c0471ffa00957d98f5ff9307c07bb56b72cbe14a6354cc692d8307804c1f19"} Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.428948 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.429858 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.429889 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.429899 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.431879 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39"} Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.431908 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f7664a7810f55a5c885441775b527f66e89449125ab775552ff0aa1307fafddd"} Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.431990 4828 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.433367 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.433407 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.433421 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:04 crc kubenswrapper[4828]: W1129 07:01:04.434851 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:04 crc kubenswrapper[4828]: E1129 07:01:04.434917 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.96:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.434980 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca"} Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.435008 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"895af8766d7426074a6b1bf17d56a8676289d086aeaa4c06161a249ecef4bede"} Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 
07:01:04.437467 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555"} Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.437503 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a417d2e17257c2da17b7f86df10498560a4cb59db790b19a2a517707f575d4f6"} Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.437597 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.438328 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.438370 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:04 crc kubenswrapper[4828]: I1129 07:01:04.438381 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:04 crc kubenswrapper[4828]: W1129 07:01:04.705484 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:04 crc kubenswrapper[4828]: E1129 07:01:04.705694 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.96:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:01:05 crc kubenswrapper[4828]: W1129 
07:01:05.138471 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:05 crc kubenswrapper[4828]: E1129 07:01:05.139108 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.96:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.257391 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.258993 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.259038 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.259049 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.259071 4828 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:01:05 crc kubenswrapper[4828]: E1129 07:01:05.260167 4828 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.96:6443: connect: connection refused" node="crc" Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.291794 4828 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.544319 4828 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39" exitCode=0 Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.544457 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39"} Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.544808 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.546737 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40"} Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.548117 4828 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555" exitCode=0 Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.548210 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555"} Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.548396 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.549362 4828 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.549402 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.549417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.549936 4828 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9" exitCode=0
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.550010 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9"}
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.550124 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.550908 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.550938 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.550951 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.551769 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.551859 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.551876 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.552137 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e40c01e6a92dd24a54c2b65fa533a70235f3faca620c24dc218d0a658b523141"}
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.552211 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.553147 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.553228 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.553296 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.553310 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.554355 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.554380 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:05 crc kubenswrapper[4828]: I1129 07:01:05.554392 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.291626 4828 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.592156 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"210a8b6f3a1cc8705eb905de8c8f2bf7a50c8863b8a4807626e1c35693129ac5"}
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.592206 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185"}
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.613995 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5"}
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.614042 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac"}
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.614137 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.615436 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.615460 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.615471 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.623790 4828 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0" exitCode=0
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.623838 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0"}
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.623913 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.624682 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.624701 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.624708 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.641938 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5"}
Nov 29 07:01:06 crc kubenswrapper[4828]: I1129 07:01:06.642003 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3"}
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.291403 4828 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused
Nov 29 07:01:07 crc kubenswrapper[4828]: W1129 07:01:07.366192 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused
Nov 29 07:01:07 crc kubenswrapper[4828]: E1129 07:01:07.366371 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.96:6443: connect: connection refused" logger="UnhandledError"
Nov 29 07:01:07 crc kubenswrapper[4828]: E1129 07:01:07.506005 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" interval="6.4s"
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.650737 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c"}
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.650866 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.652090 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.652137 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.652152 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.654220 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064"}
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.654160 4828 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064" exitCode=0
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.654403 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.655875 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.655917 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.655930 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.658061 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad"}
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.658100 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.658122 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20"}
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.658802 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.658829 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:07 crc kubenswrapper[4828]: I1129 07:01:07.658839 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.291974 4828 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused
Nov 29 07:01:08 crc kubenswrapper[4828]: W1129 07:01:08.370144 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.96:6443: connect: connection refused
Nov 29 07:01:08 crc kubenswrapper[4828]: E1129 07:01:08.370339 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.96:6443: connect: connection refused" logger="UnhandledError"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.460322 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.461284 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.461328 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.461350 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.461377 4828 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 29 07:01:08 crc kubenswrapper[4828]: E1129 07:01:08.461815 4828 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.96:6443: connect: connection refused" node="crc"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.667201 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3"}
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.670859 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"36df4952c96ccc3544233fffbf06e0da67b66816ba5d9549374fd11dff78acd0"}
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.670884 4828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.670904 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.670912 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.671778 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.671807 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.671833 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.671811 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.671847 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.671863 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.967223 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.967453 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.968710 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.968746 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:08 crc kubenswrapper[4828]: I1129 07:01:08.968757 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:09 crc kubenswrapper[4828]: I1129 07:01:09.676987 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32"}
Nov 29 07:01:09 crc kubenswrapper[4828]: I1129 07:01:09.677099 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:09 crc kubenswrapper[4828]: I1129 07:01:09.677167 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:01:09 crc kubenswrapper[4828]: I1129 07:01:09.677969 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:09 crc kubenswrapper[4828]: I1129 07:01:09.678005 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:09 crc kubenswrapper[4828]: I1129 07:01:09.678016 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:10 crc kubenswrapper[4828]: I1129 07:01:10.679392 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:10 crc kubenswrapper[4828]: I1129 07:01:10.680651 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:10 crc kubenswrapper[4828]: I1129 07:01:10.680690 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:10 crc kubenswrapper[4828]: I1129 07:01:10.680701 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:11 crc kubenswrapper[4828]: I1129 07:01:11.685467 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb"}
Nov 29 07:01:11 crc kubenswrapper[4828]: I1129 07:01:11.967799 4828 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 29 07:01:11 crc kubenswrapper[4828]: I1129 07:01:11.967923 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.081581 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.081855 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.083338 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.083384 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.083402 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:12 crc kubenswrapper[4828]: E1129 07:01:12.147542 4828 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.538008 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.538364 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.540097 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.540144 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.540154 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.647528 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.647860 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.649672 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.649720 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.649733 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.694436 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e"}
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.694510 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5"}
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.694584 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.695516 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.695572 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.695588 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.847320 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.847582 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.848811 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.848857 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:12 crc kubenswrapper[4828]: I1129 07:01:12.848870 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:13 crc kubenswrapper[4828]: I1129 07:01:13.154359 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 29 07:01:13 crc kubenswrapper[4828]: I1129 07:01:13.154577 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:13 crc kubenswrapper[4828]: I1129 07:01:13.156329 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:13 crc kubenswrapper[4828]: I1129 07:01:13.156377 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:13 crc kubenswrapper[4828]: I1129 07:01:13.156386 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:13 crc kubenswrapper[4828]: I1129 07:01:13.160003 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 29 07:01:13 crc kubenswrapper[4828]: I1129 07:01:13.696659 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:13 crc kubenswrapper[4828]: I1129 07:01:13.696767 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:13 crc kubenswrapper[4828]: I1129 07:01:13.697895 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:13 crc kubenswrapper[4828]: I1129 07:01:13.697932 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:13 crc kubenswrapper[4828]: I1129 07:01:13.697943 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:13 crc kubenswrapper[4828]: I1129 07:01:13.697937 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:13 crc kubenswrapper[4828]: I1129 07:01:13.698072 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:13 crc kubenswrapper[4828]: I1129 07:01:13.698091 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.018019 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.328925 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.699107 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.699111 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.701123 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.701173 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.701183 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.701996 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.702022 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.702036 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.705029 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.862869 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.864213 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.864243 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.864252 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:14 crc kubenswrapper[4828]: I1129 07:01:14.864289 4828 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 29 07:01:15 crc kubenswrapper[4828]: I1129 07:01:15.700909 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:15 crc kubenswrapper[4828]: I1129 07:01:15.701811 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:15 crc kubenswrapper[4828]: I1129 07:01:15.701837 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:15 crc kubenswrapper[4828]: I1129 07:01:15.701844 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:17 crc kubenswrapper[4828]: I1129 07:01:17.277612 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Nov 29 07:01:17 crc kubenswrapper[4828]: I1129 07:01:17.278235 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:17 crc kubenswrapper[4828]: I1129 07:01:17.280048 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:17 crc kubenswrapper[4828]: I1129 07:01:17.280119 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:17 crc kubenswrapper[4828]: I1129 07:01:17.280134 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:19 crc kubenswrapper[4828]: I1129 07:01:19.292715 4828 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Nov 29 07:01:19 crc kubenswrapper[4828]: W1129 07:01:19.398116 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Nov 29 07:01:19 crc kubenswrapper[4828]: I1129 07:01:19.398570 4828 trace.go:236] Trace[566841199]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 07:01:09.397) (total time: 10001ms):
Nov 29 07:01:19 crc kubenswrapper[4828]: Trace[566841199]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (07:01:19.398)
Nov 29 07:01:19 crc kubenswrapper[4828]: Trace[566841199]: [10.001294601s] [10.001294601s] END
Nov 29 07:01:19 crc kubenswrapper[4828]: E1129 07:01:19.398860 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Nov 29 07:01:19 crc kubenswrapper[4828]: I1129 07:01:19.719773 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 29 07:01:19 crc kubenswrapper[4828]: I1129 07:01:19.723012 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"36df4952c96ccc3544233fffbf06e0da67b66816ba5d9549374fd11dff78acd0"}
Nov 29 07:01:19 crc kubenswrapper[4828]: I1129 07:01:19.722962 4828 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="36df4952c96ccc3544233fffbf06e0da67b66816ba5d9549374fd11dff78acd0" exitCode=255
Nov 29 07:01:19 crc kubenswrapper[4828]: I1129 07:01:19.723225 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:19 crc kubenswrapper[4828]: I1129 07:01:19.724411 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:19 crc kubenswrapper[4828]: I1129 07:01:19.724540 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:19 crc kubenswrapper[4828]: I1129 07:01:19.724647 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:19 crc kubenswrapper[4828]: I1129 07:01:19.725907 4828 scope.go:117] "RemoveContainer" containerID="36df4952c96ccc3544233fffbf06e0da67b66816ba5d9549374fd11dff78acd0"
Nov 29 07:01:19 crc kubenswrapper[4828]: I1129 07:01:19.787450 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:01:20 crc kubenswrapper[4828]: I1129 07:01:20.142007 4828 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Nov 29 07:01:20 crc kubenswrapper[4828]: I1129 07:01:20.142108 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 29 07:01:20 crc kubenswrapper[4828]: I1129 07:01:20.146664 4828 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Nov 29 07:01:20 crc kubenswrapper[4828]: I1129 07:01:20.146742 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 29 07:01:20 crc kubenswrapper[4828]: I1129 07:01:20.729062 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 29 07:01:20 crc kubenswrapper[4828]: I1129 07:01:20.731707 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe"}
Nov 29 07:01:20 crc kubenswrapper[4828]: I1129 07:01:20.731880 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:20 crc kubenswrapper[4828]: I1129 07:01:20.732977 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:20 crc kubenswrapper[4828]: I1129 07:01:20.733013 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:20 crc kubenswrapper[4828]: I1129 07:01:20.733026 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:01:21 crc kubenswrapper[4828]: I1129 07:01:21.735546 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 29 07:01:21 crc kubenswrapper[4828]: I1129 07:01:21.735759 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:01:21 crc kubenswrapper[4828]: I1129 07:01:21.736673 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:01:21 crc kubenswrapper[4828]: I1129 07:01:21.736706 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:01:21 crc kubenswrapper[4828]: I1129 07:01:21.736742 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientPID" Nov 29 07:01:21 crc kubenswrapper[4828]: I1129 07:01:21.967432 4828 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:01:21 crc kubenswrapper[4828]: I1129 07:01:21.967528 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 07:01:22 crc kubenswrapper[4828]: I1129 07:01:22.090920 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:22 crc kubenswrapper[4828]: E1129 07:01:22.147797 4828 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 29 07:01:22 crc kubenswrapper[4828]: I1129 07:01:22.738348 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:22 crc kubenswrapper[4828]: I1129 07:01:22.739701 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:22 crc kubenswrapper[4828]: I1129 07:01:22.739736 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:22 crc kubenswrapper[4828]: I1129 07:01:22.739747 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:22 crc 
kubenswrapper[4828]: I1129 07:01:22.743659 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:23 crc kubenswrapper[4828]: I1129 07:01:23.741087 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:23 crc kubenswrapper[4828]: I1129 07:01:23.742140 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:23 crc kubenswrapper[4828]: I1129 07:01:23.742180 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:23 crc kubenswrapper[4828]: I1129 07:01:23.742190 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:24 crc kubenswrapper[4828]: I1129 07:01:24.744628 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:24 crc kubenswrapper[4828]: I1129 07:01:24.745954 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:24 crc kubenswrapper[4828]: I1129 07:01:24.746114 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:24 crc kubenswrapper[4828]: I1129 07:01:24.746218 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.143577 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s" Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.147631 4828 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: 
autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.148217 4828 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.150095 4828 trace.go:236] Trace[63039483]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 07:01:10.402) (total time: 14747ms): Nov 29 07:01:25 crc kubenswrapper[4828]: Trace[63039483]: ---"Objects listed" error: 14747ms (07:01:25.150) Nov 29 07:01:25 crc kubenswrapper[4828]: Trace[63039483]: [14.74730227s] [14.74730227s] END Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.150113 4828 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.153094 4828 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.160124 4828 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.288122 4828 apiserver.go:52] "Watching apiserver" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.292335 4828 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.292864 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.293349 4828 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.294072 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.294136 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.294968 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.294827 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.295024 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.294908 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.294201 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.294875 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.297786 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.298808 4828 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.301348 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.302291 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.303592 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.303600 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.303735 4828 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.305264 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.305590 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.311157 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.342854 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.360707 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.360889 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.360965 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.360989 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361011 4828 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361047 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361069 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361092 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361113 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361136 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361165 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361186 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361206 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361234 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361258 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361318 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361378 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361403 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361433 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361459 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361483 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 29 
07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361520 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361551 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361574 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361595 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361615 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361637 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361661 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361683 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361704 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361726 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361747 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 29 07:01:25 crc kubenswrapper[4828]: 
I1129 07:01:25.361766 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361785 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361806 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361839 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361866 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361893 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361921 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361940 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.361968 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362083 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362119 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 
07:01:25.362139 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362157 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362178 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362201 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362223 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362246 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362289 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362311 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362356 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362386 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362406 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 29 07:01:25 crc 
kubenswrapper[4828]: I1129 07:01:25.362424 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362444 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362463 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362483 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362504 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362525 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362549 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362574 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362598 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362703 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362725 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362744 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362762 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362785 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362803 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362827 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362858 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362877 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362920 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362940 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362911 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362964 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.362990 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363014 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363040 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363066 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363092 4828 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363115 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363140 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363166 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363192 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363216 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") 
pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363236 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363289 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363323 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363348 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363369 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363394 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363417 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363440 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363463 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363487 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363511 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" 
(UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363534 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363556 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363579 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363600 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363621 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363643 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363667 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363694 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363715 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363739 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363762 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 
07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363785 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363810 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363832 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363856 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363878 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363901 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363922 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363945 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363969 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363992 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364014 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" 
(UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364036 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364061 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364081 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364104 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364128 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364151 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364173 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364195 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364220 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364243 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364343 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364371 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364395 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364423 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364623 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364656 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364683 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364709 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364734 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364761 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364790 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364814 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364837 4828 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364862 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364891 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364917 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364943 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364968 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364991 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.365015 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363356 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363429 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363579 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363748 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363784 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.365106 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363794 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363888 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.363964 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.365140 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364021 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364125 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364202 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364224 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364431 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364456 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364695 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364708 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364794 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364889 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.365025 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.364166 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.365382 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.365438 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.365515 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.365576 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.365657 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.365685 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.367720 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.368096 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.368144 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.368485 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.368821 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.368885 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.369128 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.369203 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.369471 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.369403 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.369889 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.369757 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.370309 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.370414 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.370581 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.370833 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.370840 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.370877 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.370885 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.371122 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.371489 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.371532 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.371651 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.371952 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.372097 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.372420 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.372468 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.372641 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.372955 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.373044 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.373098 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.373231 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.373301 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.373575 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.373609 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.373718 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.373853 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.373926 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.374065 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.365040 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.374599 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.374654 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.374854 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.374882 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.374953 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.375162 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.375181 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.374693 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.375541 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.375639 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.375820 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.375522 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.375692 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.375706 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.375831 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.381359 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.381672 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.381861 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.381991 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.382241 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.382588 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.382629 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.382666 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.382900 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:01:25.882854064 +0000 UTC m=+25.504930122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.383198 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.383466 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.383687 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.384067 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.384649 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.384946 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.385585 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.386115 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.386120 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.386335 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.386675 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.386898 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.387042 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.387067 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.387580 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.388304 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.389758 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.389985 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.389544 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.390524 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.390536 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.390840 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.375879 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392341 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392354 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392366 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392442 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392471 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392500 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392545 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392582 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392712 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392736 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392761 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392788 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392816 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " 
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392840 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392863 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392886 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392909 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392938 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392965 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.392988 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393009 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393032 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393055 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393080 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 29 07:01:25 crc 
kubenswrapper[4828]: I1129 07:01:25.393103 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393132 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393155 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393176 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393212 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393235 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393282 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393309 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393331 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393351 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393374 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393396 4828 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393418 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393440 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393462 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393484 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393508 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393585 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393617 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393640 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393690 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393725 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393749 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393778 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393801 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393826 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393851 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393873 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.393896 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394209 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394235 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394289 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394362 4828 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394381 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394394 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394408 4828 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394421 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394435 4828 reconciler_common.go:293] "Volume detached for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394441 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394448 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394481 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394496 4828 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394510 4828 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394521 4828 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394533 4828 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394545 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394557 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394569 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394580 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394591 4828 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394602 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394613 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394623 4828 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394634 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394644 4828 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394654 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394665 4828 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394675 4828 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394685 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394696 4828 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394707 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394729 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394740 4828 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394750 4828 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394762 4828 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394773 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc 
kubenswrapper[4828]: I1129 07:01:25.394783 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394821 4828 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394831 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394842 4828 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394853 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394864 4828 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394875 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394886 4828 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394897 4828 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394908 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394918 4828 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394937 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394949 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394961 4828 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394972 4828 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394983 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394994 4828 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395005 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395016 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395027 4828 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395038 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395049 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395060 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395072 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395082 4828 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395092 4828 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395113 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395128 4828 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395154 4828 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395167 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395179 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395190 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395201 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395211 4828 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395224 4828 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395245 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node 
\"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395256 4828 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395280 4828 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395291 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395302 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395312 4828 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395324 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395334 4828 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 29 
07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395346 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395357 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395368 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395380 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395391 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395401 4828 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395412 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 
07:01:25.395422 4828 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395434 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395445 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395456 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395466 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395478 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395488 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395498 4828 reconciler_common.go:293] 
"Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395508 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395524 4828 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395544 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395554 4828 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395564 4828 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395574 4828 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395585 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395596 4828 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395607 4828 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395617 4828 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395627 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395638 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395647 4828 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395658 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 29 
07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395669 4828 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395679 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394484 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.394906 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395250 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395367 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395714 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395955 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.395994 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.396108 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.396206 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.396251 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.396748 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.396784 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.396959 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.398205 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:25.898182129 +0000 UTC m=+25.520258267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.396822 4828 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.398303 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.398313 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.398511 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.398784 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.399052 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.399146 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.397063 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.397152 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.397408 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.397433 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.397699 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.401563 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.397819 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.401693 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.397038 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.401887 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.402047 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.402619 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.402696 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.402959 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.403094 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.402749 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.403447 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.403953 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.404103 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.404174 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:25.904149488 +0000 UTC m=+25.526225636 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.405959 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.408675 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.409520 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.410042 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.410355 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.412644 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.412672 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.412689 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.414033 4828 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:25.914001196 +0000 UTC m=+25.536077404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.414334 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.414402 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.414810 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.414612 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.414911 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.416663 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.417206 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.417246 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.417286 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.417382 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:25.917341314 +0000 UTC m=+25.539417392 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.418579 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.425411 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.425488 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.425963 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.426075 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.441624 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.442000 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.447306 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.447715 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.448768 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.449714 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.450550 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.452237 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.453380 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.456866 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.458723 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.459427 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.459763 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.460631 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.461040 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.461488 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.462034 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.462672 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.462720 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.465820 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.472193 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.475089 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.475113 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.479574 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.480814 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.481461 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.481601 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.481597 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.482007 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.482933 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.484810 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.485536 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.485653 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.485688 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.486765 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.487584 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.488721 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.489204 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.491393 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.492210 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.495219 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.495686 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496206 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496353 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496436 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496455 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496471 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496484 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496495 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496508 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496522 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496534 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496545 4828 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496556 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496567 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496609 4828 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496608 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496622 4828 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496667 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496678 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496695 4828 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496711 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496727 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496741 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496756 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496768 4828 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496781 4828 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496794 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496806 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496818 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496831 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496843 4828 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496854 4828 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496868 4828 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496883 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496896 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496909 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496920 4828 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496934 4828 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496947 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496959 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496971 4828 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496984 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.496997 4828 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497010 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497023 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497036 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497062 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497077 4828 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497095 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497108 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497120 4828 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497132 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497146 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497161 4828 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497174 4828 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497186 4828 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497198 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497211 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497222 4828 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497235 4828 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497246 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497258 4828 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497287 4828 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497300 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497311 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497324 4828 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497338 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497355 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497368 4828 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497380 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497392 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497404 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497417 4828 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497430 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497441 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497453 4828 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497466 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497477 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497489 4828 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497502 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497526 4828 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.497539 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.501408 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.502240 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.507171 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.508382 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.508382 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.508479 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.508497 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.508740 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.508998 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.515965 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.517254 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.518389 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.518966 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.520701 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.521549 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.522669 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.523457 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.524635 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.524664 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.525249 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.525812 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.526753 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.536226 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.536823 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.538076 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.538849 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.540305 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.540821 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.541953 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.542479 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.544647 4828 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.544856 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.550654 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.551305 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.553606 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.554250 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.556960 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.558345 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.559883 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.562592 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.563610 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.565991 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.566916 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.568246 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.569047 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.570180 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.570895 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.572079 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8"
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.573178 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.575122 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.575676 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.576729 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.577563 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.578697 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.579870 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.600108 4828 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.600151 4828 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.600162 4828 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.600173 4828 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.600184 4828 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.600194 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.600204 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.600215 4828 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" 
DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.600226 4828 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.600238 4828 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.611980 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.660729 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.661077 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.750503 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3dfa885b3aa201ea3c853a606f6861806ffbca48a97115ea1208f5cb8cff8a67"} Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.752777 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c55e1cb29fc6586ff7a4823df0f4a191146a913a22eefe3781304cac4c784bd0"} Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.762673 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"df5248eca6b1391fb622c99387d941a715bbf05575577b9f57896e49fff59148"} Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.905016 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.905096 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.905189 4828 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:01:26.905158737 +0000 UTC m=+26.527234795 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.905227 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:01:25 crc kubenswrapper[4828]: I1129 07:01:25.905251 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.905309 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:26.90529165 +0000 UTC m=+26.527367788 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.905366 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:01:25 crc kubenswrapper[4828]: E1129 07:01:25.905401 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:26.905394643 +0000 UTC m=+26.527470701 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.006644 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.006698 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:26 crc kubenswrapper[4828]: E1129 07:01:26.006848 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:01:26 crc kubenswrapper[4828]: E1129 07:01:26.006869 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:01:26 crc kubenswrapper[4828]: E1129 07:01:26.006882 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:26 crc kubenswrapper[4828]: E1129 07:01:26.006939 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:27.006922607 +0000 UTC m=+26.628998665 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:26 crc kubenswrapper[4828]: E1129 07:01:26.007017 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:01:26 crc kubenswrapper[4828]: E1129 07:01:26.007030 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:01:26 crc kubenswrapper[4828]: E1129 07:01:26.007047 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:26 crc kubenswrapper[4828]: E1129 07:01:26.007078 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:27.007068081 +0000 UTC m=+26.629144139 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.769996 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.770913 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.773078 4828 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe" exitCode=255 Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.773166 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe"} Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.773467 4828 scope.go:117] "RemoveContainer" containerID="36df4952c96ccc3544233fffbf06e0da67b66816ba5d9549374fd11dff78acd0" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.775242 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0"} Nov 29 07:01:26 
crc kubenswrapper[4828]: I1129 07:01:26.775309 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012"} Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.781235 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856"} Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.795137 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.809489 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.820519 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.820669 4828 scope.go:117] "RemoveContainer" containerID="63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe" Nov 29 07:01:26 crc kubenswrapper[4828]: E1129 07:01:26.821072 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.824066 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.840379 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.851787 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.863662 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.882262 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.898516 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.915139 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.915210 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.915259 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:26 crc kubenswrapper[4828]: E1129 07:01:26.915385 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:01:26 crc kubenswrapper[4828]: E1129 07:01:26.915420 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:01:28.915385357 +0000 UTC m=+28.537461415 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:01:26 crc kubenswrapper[4828]: E1129 07:01:26.915419 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:01:26 crc kubenswrapper[4828]: E1129 07:01:26.915469 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-29 07:01:28.915455778 +0000 UTC m=+28.537531936 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:01:26 crc kubenswrapper[4828]: E1129 07:01:26.915522 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:28.91550918 +0000 UTC m=+28.537585238 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.917114 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.937829 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d
773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.964029 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-dgclj"] Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.964497 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.964705 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-49f6l"] Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.965739 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.966230 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-qfj9g"] Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.966707 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qfj9g" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.967774 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ghlnj"] Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.967820 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.970189 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.973519 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.977607 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.977762 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.977875 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.979430 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.979559 4828 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.980585 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.980585 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.980740 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.980750 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.980938 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36df4952c96ccc3544233fffbf06e0da67b66816ba5d9549374fd11dff78acd0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:19Z\\\",\\\"message\\\":\\\"W1129 07:01:08.058706 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1129 07:01:08.059333 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764399668 cert, and key in /tmp/serving-cert-3786425716/serving-signer.crt, /tmp/serving-cert-3786425716/serving-signer.key\\\\nI1129 07:01:08.526085 1 observer_polling.go:159] Starting file observer\\\\nW1129 07:01:08.528027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1129 07:01:08.528188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:08.529302 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3786425716/tls.crt::/tmp/serving-cert-3786425716/tls.key\\\\\\\"\\\\nF1129 07:01:19.415577 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.981408 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.981419 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.981435 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.981472 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.981538 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.982146 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.982370 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.982685 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 
07:01:26.994553 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-p6rzz"] Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.994993 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p6rzz" Nov 29 07:01:26 crc kubenswrapper[4828]: I1129 07:01:26.999928 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.000532 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.001515 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.016679 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-systemd-units\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.016732 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w652b\" (UniqueName: \"kubernetes.io/projected/ce72f1df-15a3-475b-918b-9076a0d9c29c-kube-api-access-w652b\") pod \"machine-config-daemon-dgclj\" (UID: \"ce72f1df-15a3-475b-918b-9076a0d9c29c\") " pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.016761 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c5836996-65fd-4b24-b757-269259483919-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: 
\"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.016784 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-systemd\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.016806 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-etc-openvswitch\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.016825 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-etc-kubernetes\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.016848 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c5836996-65fd-4b24-b757-269259483919-cni-binary-copy\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.016869 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-kubelet\") pod 
\"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.016967 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-openvswitch\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017010 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017044 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-run-netns\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017098 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-run-ovn-kubernetes\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017129 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-os-release\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017178 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017240 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:27 crc kubenswrapper[4828]: E1129 07:01:27.017367 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:01:27 crc kubenswrapper[4828]: E1129 07:01:27.017392 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:01:27 crc kubenswrapper[4828]: E1129 07:01:27.017407 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:27 crc kubenswrapper[4828]: E1129 07:01:27.017470 4828 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:29.017451634 +0000 UTC m=+28.639527762 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017406 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce72f1df-15a3-475b-918b-9076a0d9c29c-proxy-tls\") pod \"machine-config-daemon-dgclj\" (UID: \"ce72f1df-15a3-475b-918b-9076a0d9c29c\") " pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:01:27 crc kubenswrapper[4828]: E1129 07:01:27.017491 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:01:27 crc kubenswrapper[4828]: E1129 07:01:27.017529 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:01:27 crc kubenswrapper[4828]: E1129 07:01:27.017539 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] 
Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017581 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-cni-bin\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: E1129 07:01:27.017628 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:29.017616778 +0000 UTC m=+28.639692956 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017655 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovnkube-config\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017683 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovnkube-script-lib\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017710 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rk2h\" (UniqueName: \"kubernetes.io/projected/c273b031-d4b1-480a-9dd1-e26ed759c8a0-kube-api-access-4rk2h\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017737 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b3a37050-181c-42b4-acf9-dc458a0f5bcf-cni-binary-copy\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017759 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c5836996-65fd-4b24-b757-269259483919-os-release\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017780 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6v5g\" (UniqueName: \"kubernetes.io/projected/c5836996-65fd-4b24-b757-269259483919-kube-api-access-p6v5g\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017838 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-var-lib-openvswitch\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017874 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-multus-socket-dir-parent\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017914 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovn-node-metrics-cert\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017937 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-cnibin\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.017979 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-run-multus-certs\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018003 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qz9s\" (UniqueName: 
\"kubernetes.io/projected/e6388d13-a6fa-4313-b6ee-7ac3e47bc893-kube-api-access-9qz9s\") pod \"node-resolver-p6rzz\" (UID: \"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\") " pod="openshift-dns/node-resolver-p6rzz" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018037 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c5836996-65fd-4b24-b757-269259483919-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018113 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-log-socket\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018143 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-cni-netd\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018169 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-system-cni-dir\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018195 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-var-lib-kubelet\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018215 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e6388d13-a6fa-4313-b6ee-7ac3e47bc893-hosts-file\") pod \"node-resolver-p6rzz\" (UID: \"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\") " pod="openshift-dns/node-resolver-p6rzz" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018255 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-ovn\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018297 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-node-log\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018316 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-env-overrides\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018340 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" 
(UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-run-k8s-cni-cncf-io\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018383 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-multus-cni-dir\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018403 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-run-netns\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018420 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-var-lib-cni-multus\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018449 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-multus-conf-dir\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018467 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c5836996-65fd-4b24-b757-269259483919-system-cni-dir\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018523 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-slash\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018547 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce72f1df-15a3-475b-918b-9076a0d9c29c-rootfs\") pod \"machine-config-daemon-dgclj\" (UID: \"ce72f1df-15a3-475b-918b-9076a0d9c29c\") " pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018566 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-hostroot\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018591 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce72f1df-15a3-475b-918b-9076a0d9c29c-mcd-auth-proxy-config\") pod \"machine-config-daemon-dgclj\" (UID: \"ce72f1df-15a3-475b-918b-9076a0d9c29c\") " pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018611 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c5836996-65fd-4b24-b757-269259483919-cnibin\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018646 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-var-lib-cni-bin\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018677 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b3a37050-181c-42b4-acf9-dc458a0f5bcf-multus-daemon-config\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.018697 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvb9x\" (UniqueName: \"kubernetes.io/projected/b3a37050-181c-42b4-acf9-dc458a0f5bcf-kube-api-access-kvb9x\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.021583 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.040666 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.062370 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.079856 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.105885 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36df4952c96ccc3544233fffbf06e0da67b66816ba5d9549374fd11dff78acd0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:19Z\\\",\\\"message\\\":\\\"W1129 07:01:08.058706 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1129 07:01:08.059333 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764399668 cert, and key in /tmp/serving-cert-3786425716/serving-signer.crt, /tmp/serving-cert-3786425716/serving-signer.key\\\\nI1129 07:01:08.526085 1 observer_polling.go:159] Starting file observer\\\\nW1129 07:01:08.528027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1129 07:01:08.528188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:08.529302 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3786425716/tls.crt::/tmp/serving-cert-3786425716/tls.key\\\\\\\"\\\\nF1129 07:01:19.415577 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.119790 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-systemd-units\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.119847 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w652b\" (UniqueName: \"kubernetes.io/projected/ce72f1df-15a3-475b-918b-9076a0d9c29c-kube-api-access-w652b\") pod \"machine-config-daemon-dgclj\" (UID: \"ce72f1df-15a3-475b-918b-9076a0d9c29c\") " pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.119870 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c5836996-65fd-4b24-b757-269259483919-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.119890 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-etc-openvswitch\") pod 
\"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.119907 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-etc-kubernetes\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.119925 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c5836996-65fd-4b24-b757-269259483919-cni-binary-copy\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.119944 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-systemd\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.119961 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-openvswitch\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.119980 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-49f6l\" 
(UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.119999 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-kubelet\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120019 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-run-ovn-kubernetes\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120036 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-os-release\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120056 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-run-netns\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120094 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-cni-bin\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120112 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovnkube-config\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120130 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovnkube-script-lib\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120149 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rk2h\" (UniqueName: \"kubernetes.io/projected/c273b031-d4b1-480a-9dd1-e26ed759c8a0-kube-api-access-4rk2h\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120169 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce72f1df-15a3-475b-918b-9076a0d9c29c-proxy-tls\") pod \"machine-config-daemon-dgclj\" (UID: \"ce72f1df-15a3-475b-918b-9076a0d9c29c\") " pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120189 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6v5g\" (UniqueName: \"kubernetes.io/projected/c5836996-65fd-4b24-b757-269259483919-kube-api-access-p6v5g\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " 
pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120206 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b3a37050-181c-42b4-acf9-dc458a0f5bcf-cni-binary-copy\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120224 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c5836996-65fd-4b24-b757-269259483919-os-release\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120240 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-var-lib-openvswitch\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120256 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-multus-socket-dir-parent\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120300 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-cnibin\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 
07:01:27.120321 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-run-multus-certs\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120339 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qz9s\" (UniqueName: \"kubernetes.io/projected/e6388d13-a6fa-4313-b6ee-7ac3e47bc893-kube-api-access-9qz9s\") pod \"node-resolver-p6rzz\" (UID: \"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\") " pod="openshift-dns/node-resolver-p6rzz" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120367 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovn-node-metrics-cert\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120384 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-log-socket\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120402 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-cni-netd\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120419 4828 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-system-cni-dir\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120437 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c5836996-65fd-4b24-b757-269259483919-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120457 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-var-lib-kubelet\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120475 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e6388d13-a6fa-4313-b6ee-7ac3e47bc893-hosts-file\") pod \"node-resolver-p6rzz\" (UID: \"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\") " pod="openshift-dns/node-resolver-p6rzz" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120495 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-ovn\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120515 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-node-log\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120532 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-env-overrides\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120550 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-run-k8s-cni-cncf-io\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120583 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-multus-cni-dir\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120602 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-run-netns\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120620 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-var-lib-cni-multus\") pod \"multus-qfj9g\" (UID: 
\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120640 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c5836996-65fd-4b24-b757-269259483919-system-cni-dir\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120659 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-multus-conf-dir\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120678 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-slash\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120696 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce72f1df-15a3-475b-918b-9076a0d9c29c-rootfs\") pod \"machine-config-daemon-dgclj\" (UID: \"ce72f1df-15a3-475b-918b-9076a0d9c29c\") " pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120730 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-hostroot\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc 
kubenswrapper[4828]: I1129 07:01:27.120753 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce72f1df-15a3-475b-918b-9076a0d9c29c-mcd-auth-proxy-config\") pod \"machine-config-daemon-dgclj\" (UID: \"ce72f1df-15a3-475b-918b-9076a0d9c29c\") " pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120770 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c5836996-65fd-4b24-b757-269259483919-cnibin\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120789 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b3a37050-181c-42b4-acf9-dc458a0f5bcf-multus-daemon-config\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120807 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvb9x\" (UniqueName: \"kubernetes.io/projected/b3a37050-181c-42b4-acf9-dc458a0f5bcf-kube-api-access-kvb9x\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120827 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-var-lib-cni-bin\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120933 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-var-lib-cni-bin\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.120987 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-systemd-units\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.122102 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c5836996-65fd-4b24-b757-269259483919-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.122159 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-etc-openvswitch\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.122191 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-etc-kubernetes\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.122676 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/c5836996-65fd-4b24-b757-269259483919-cni-binary-copy\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.122727 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-systemd\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.122757 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-openvswitch\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.122785 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.122812 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-kubelet\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.122838 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-run-ovn-kubernetes\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123212 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-os-release\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123251 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-run-netns\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123296 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-cni-bin\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123596 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-var-lib-kubelet\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123670 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e6388d13-a6fa-4313-b6ee-7ac3e47bc893-hosts-file\") pod \"node-resolver-p6rzz\" (UID: \"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\") " 
pod="openshift-dns/node-resolver-p6rzz" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123605 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c5836996-65fd-4b24-b757-269259483919-system-cni-dir\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123652 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-run-k8s-cni-cncf-io\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123826 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-cnibin\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123886 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-run-multus-certs\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123900 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-multus-cni-dir\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123936 4828 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-hostroot\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123996 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-var-lib-cni-multus\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123939 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-var-lib-openvswitch\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.124006 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-system-cni-dir\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123962 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-host-run-netns\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123990 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-node-log\") pod \"ovnkube-node-49f6l\" (UID: 
\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.123605 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-ovn\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.124061 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-slash\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.124069 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c5836996-65fd-4b24-b757-269259483919-cnibin\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.124109 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-log-socket\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.124141 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce72f1df-15a3-475b-918b-9076a0d9c29c-rootfs\") pod \"machine-config-daemon-dgclj\" (UID: \"ce72f1df-15a3-475b-918b-9076a0d9c29c\") " pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:01:27 crc 
kubenswrapper[4828]: I1129 07:01:27.124147 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-env-overrides\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.124162 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-cni-netd\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.124404 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-multus-conf-dir\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.124443 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b3a37050-181c-42b4-acf9-dc458a0f5bcf-cni-binary-copy\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.124542 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c5836996-65fd-4b24-b757-269259483919-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.124844 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b3a37050-181c-42b4-acf9-dc458a0f5bcf-multus-socket-dir-parent\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.124915 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b3a37050-181c-42b4-acf9-dc458a0f5bcf-multus-daemon-config\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.125096 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovnkube-config\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.125462 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce72f1df-15a3-475b-918b-9076a0d9c29c-mcd-auth-proxy-config\") pod \"machine-config-daemon-dgclj\" (UID: \"ce72f1df-15a3-475b-918b-9076a0d9c29c\") " pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.125627 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c5836996-65fd-4b24-b757-269259483919-os-release\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.125802 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovnkube-script-lib\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.127071 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.131553 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovn-node-metrics-cert\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.141566 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce72f1df-15a3-475b-918b-9076a0d9c29c-proxy-tls\") pod \"machine-config-daemon-dgclj\" (UID: \"ce72f1df-15a3-475b-918b-9076a0d9c29c\") " pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.156964 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6v5g\" (UniqueName: \"kubernetes.io/projected/c5836996-65fd-4b24-b757-269259483919-kube-api-access-p6v5g\") pod \"multus-additional-cni-plugins-ghlnj\" (UID: \"c5836996-65fd-4b24-b757-269259483919\") " pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.159525 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w652b\" (UniqueName: \"kubernetes.io/projected/ce72f1df-15a3-475b-918b-9076a0d9c29c-kube-api-access-w652b\") pod \"machine-config-daemon-dgclj\" (UID: \"ce72f1df-15a3-475b-918b-9076a0d9c29c\") " pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.160035 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvb9x\" (UniqueName: \"kubernetes.io/projected/b3a37050-181c-42b4-acf9-dc458a0f5bcf-kube-api-access-kvb9x\") pod \"multus-qfj9g\" (UID: \"b3a37050-181c-42b4-acf9-dc458a0f5bcf\") " pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.162748 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rk2h\" (UniqueName: \"kubernetes.io/projected/c273b031-d4b1-480a-9dd1-e26ed759c8a0-kube-api-access-4rk2h\") pod \"ovnkube-node-49f6l\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.202721 4828 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.245776 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qz9s\" (UniqueName: \"kubernetes.io/projected/e6388d13-a6fa-4313-b6ee-7ac3e47bc893-kube-api-access-9qz9s\") pod \"node-resolver-p6rzz\" (UID: \"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\") " pod="openshift-dns/node-resolver-p6rzz" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.271642 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.290526 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.296209 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.299326 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.311252 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qfj9g" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.311685 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.318059 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.329202 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-p6rzz" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.334610 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.339973 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.341834 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.352457 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.366955 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.414571 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:27 crc kubenswrapper[4828]: E1129 07:01:27.414725 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.414812 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:27 crc kubenswrapper[4828]: E1129 07:01:27.414873 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.414920 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:27 crc kubenswrapper[4828]: E1129 07:01:27.414974 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.423229 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.442829 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.443940 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.463353 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.495436 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.538299 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.559721 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.579424 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.597729 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.620432 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef
5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.641287 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.659771 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.676289 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.769789 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.788009 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36df4952c96ccc3544233fffbf06e0da67b66816ba5d9549374fd11dff78acd0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:19Z\\\",\\\"message\\\":\\\"W1129 07:01:08.058706 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1129 07:01:08.059333 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764399668 cert, and key in /tmp/serving-cert-3786425716/serving-signer.crt, /tmp/serving-cert-3786425716/serving-signer.key\\\\nI1129 07:01:08.526085 1 observer_polling.go:159] Starting file observer\\\\nW1129 07:01:08.528027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1129 07:01:08.528188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:08.529302 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3786425716/tls.crt::/tmp/serving-cert-3786425716/tls.key\\\\\\\"\\\\nF1129 07:01:19.415577 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.788375 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be"} Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.788474 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"ab0207fd56e1047f06d3077027fc3e59a8a37eb85ff63480eb44d6408bfa4002"} Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.796531 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.798714 4828 scope.go:117] "RemoveContainer" containerID="63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe" Nov 29 07:01:27 crc kubenswrapper[4828]: E1129 07:01:27.798892 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.799876 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" event={"ID":"c5836996-65fd-4b24-b757-269259483919","Type":"ContainerStarted","Data":"685beb48a43683cd27c392c16b54ddb6df1f75bdded0b5d174c4adc5a88b60c3"} Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.804589 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qfj9g" event={"ID":"b3a37050-181c-42b4-acf9-dc458a0f5bcf","Type":"ContainerStarted","Data":"77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8"} Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.804661 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qfj9g" event={"ID":"b3a37050-181c-42b4-acf9-dc458a0f5bcf","Type":"ContainerStarted","Data":"82eeb6e414c7c3dad3ec4368b4752d1d69292aa7e16c1eceb2b841879e819029"} Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.807127 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p6rzz" event={"ID":"e6388d13-a6fa-4313-b6ee-7ac3e47bc893","Type":"ContainerStarted","Data":"c3dccc03746b1b93c8d4f8074be6c1265e6d3ea68d2020e626e55f533f7fb9a4"} Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.810566 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.812846 4828 generic.go:334] "Generic (PLEG): container finished" podID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerID="83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee" exitCode=0 Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.814066 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerDied","Data":"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee"} Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.814447 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerStarted","Data":"e45c516e2b97514c9623ddfea8e7dd6e12e280e2e55e1d07dd88fdf4101cefc3"} Nov 29 07:01:27 crc 
kubenswrapper[4828]: I1129 07:01:27.825961 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.845548 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.869143 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:27 crc kubenswrapper[4828]: I1129 07:01:27.991652 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:27Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.059076 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.089497 4828 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 29 07:01:28 crc 
kubenswrapper[4828]: I1129 07:01:28.143359 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.165683 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiti
ng\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.184927 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.445641 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.497816 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.532580 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.732291 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.793937 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.821028 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.841681 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p6rzz" event={"ID":"e6388d13-a6fa-4313-b6ee-7ac3e47bc893","Type":"ContainerStarted","Data":"484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa"} Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.851077 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerStarted","Data":"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e"} Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.851130 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerStarted","Data":"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0"} Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.854606 4828 generic.go:334] "Generic (PLEG): container finished" podID="c5836996-65fd-4b24-b757-269259483919" containerID="d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e" exitCode=0 Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.854715 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" event={"ID":"c5836996-65fd-4b24-b757-269259483919","Type":"ContainerDied","Data":"d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e"} Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.861520 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" 
event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b"} Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.864941 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.884837 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.972022 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.972260 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:28 crc kubenswrapper[4828]: I1129 07:01:28.972389 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:28 crc kubenswrapper[4828]: E1129 07:01:28.972757 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:01:28 crc kubenswrapper[4828]: E1129 07:01:28.972951 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:32.972894094 +0000 UTC m=+32.594970152 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:01:28 crc kubenswrapper[4828]: E1129 07:01:28.972977 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:01:32.972967986 +0000 UTC m=+32.595044044 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:01:28 crc kubenswrapper[4828]: E1129 07:01:28.972994 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:01:28 crc kubenswrapper[4828]: E1129 07:01:28.973043 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:32.973023047 +0000 UTC m=+32.595099095 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.000629 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.010934 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.029801 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.031100 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.031260 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.063440 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.077128 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.077199 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:29 crc kubenswrapper[4828]: E1129 07:01:29.079355 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:01:29 crc kubenswrapper[4828]: E1129 07:01:29.079411 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:01:29 crc kubenswrapper[4828]: E1129 07:01:29.079436 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:29 crc kubenswrapper[4828]: E1129 07:01:29.079509 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:33.079484866 +0000 UTC m=+32.701560984 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:29 crc kubenswrapper[4828]: E1129 07:01:29.080293 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:01:29 crc kubenswrapper[4828]: E1129 07:01:29.080315 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:01:29 crc kubenswrapper[4828]: E1129 07:01:29.080326 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:29 crc kubenswrapper[4828]: E1129 07:01:29.080362 4828 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:33.080350976 +0000 UTC m=+32.702427114 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.093213 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.111669 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.140165 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.287745 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.311906 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.331301 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.348346 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.370582 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.386803 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.400904 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.411526 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.411682 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:29 crc kubenswrapper[4828]: E1129 07:01:29.411854 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.412246 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:29 crc kubenswrapper[4828]: E1129 07:01:29.412375 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:29 crc kubenswrapper[4828]: E1129 07:01:29.412462 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.431695 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-2
9T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.450651 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.463721 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.477456 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.493113 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.513523 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fd
dcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.528610 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.551161 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.567137 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.581159 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.596202 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.619318 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.787662 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.788773 4828 scope.go:117] "RemoveContainer" containerID="63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe" Nov 29 07:01:29 crc kubenswrapper[4828]: E1129 07:01:29.789043 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.913304 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerStarted","Data":"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc"} Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.913359 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerStarted","Data":"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4"} Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.913372 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerStarted","Data":"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5"} Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.913384 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerStarted","Data":"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39"} Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.916936 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" event={"ID":"c5836996-65fd-4b24-b757-269259483919","Type":"ContainerStarted","Data":"bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507"} Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.937191 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fd
dcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.950960 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:29 crc kubenswrapper[4828]: I1129 07:01:29.971682 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.007126 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.028467 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.046721 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.069778 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.096628 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.114642 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.130254 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.146292 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.184007 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.198518 4828 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.215512 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.338484 4828 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/node-ca-26zg8"] Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.339502 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-26zg8" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.343078 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.343163 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.343083 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.343440 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.364226 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.384707 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.402163 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.410142 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8b3bb3f6-5c62-4db9-a1d3-0fd476518332-serviceca\") pod \"node-ca-26zg8\" (UID: \"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\") " pod="openshift-image-registry/node-ca-26zg8" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 
07:01:30.410231 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smgvc\" (UniqueName: \"kubernetes.io/projected/8b3bb3f6-5c62-4db9-a1d3-0fd476518332-kube-api-access-smgvc\") pod \"node-ca-26zg8\" (UID: \"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\") " pod="openshift-image-registry/node-ca-26zg8" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.410294 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b3bb3f6-5c62-4db9-a1d3-0fd476518332-host\") pod \"node-ca-26zg8\" (UID: \"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\") " pod="openshift-image-registry/node-ca-26zg8" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.420246 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4c
f86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.436590 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.455889 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.474170 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.487741 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.511806 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8b3bb3f6-5c62-4db9-a1d3-0fd476518332-serviceca\") pod \"node-ca-26zg8\" (UID: \"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\") " pod="openshift-image-registry/node-ca-26zg8" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.511875 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-smgvc\" (UniqueName: \"kubernetes.io/projected/8b3bb3f6-5c62-4db9-a1d3-0fd476518332-kube-api-access-smgvc\") pod \"node-ca-26zg8\" (UID: \"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\") " pod="openshift-image-registry/node-ca-26zg8" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.511911 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b3bb3f6-5c62-4db9-a1d3-0fd476518332-host\") pod \"node-ca-26zg8\" (UID: \"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\") " pod="openshift-image-registry/node-ca-26zg8" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.512103 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b3bb3f6-5c62-4db9-a1d3-0fd476518332-host\") pod \"node-ca-26zg8\" (UID: \"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\") " pod="openshift-image-registry/node-ca-26zg8" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.513760 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8b3bb3f6-5c62-4db9-a1d3-0fd476518332-serviceca\") pod \"node-ca-26zg8\" (UID: \"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\") " pod="openshift-image-registry/node-ca-26zg8" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.515825 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.536209 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.542433 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smgvc\" (UniqueName: \"kubernetes.io/projected/8b3bb3f6-5c62-4db9-a1d3-0fd476518332-kube-api-access-smgvc\") pod \"node-ca-26zg8\" (UID: \"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\") " pod="openshift-image-registry/node-ca-26zg8" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.570187 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.587652 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.604769 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.618864 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.637010 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.763807 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-26zg8" Nov 29 07:01:30 crc kubenswrapper[4828]: W1129 07:01:30.786546 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b3bb3f6_5c62_4db9_a1d3_0fd476518332.slice/crio-b665b201e3be39bcfed775519394b0d4e5ab9cff5705361b2a474a2498391265 WatchSource:0}: Error finding container b665b201e3be39bcfed775519394b0d4e5ab9cff5705361b2a474a2498391265: Status 404 returned error can't find the container with id b665b201e3be39bcfed775519394b0d4e5ab9cff5705361b2a474a2498391265 Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.926117 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160"} Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.935655 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-26zg8" event={"ID":"8b3bb3f6-5c62-4db9-a1d3-0fd476518332","Type":"ContainerStarted","Data":"b665b201e3be39bcfed775519394b0d4e5ab9cff5705361b2a474a2498391265"} Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 
07:01:30.939120 4828 generic.go:334] "Generic (PLEG): container finished" podID="c5836996-65fd-4b24-b757-269259483919" containerID="bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507" exitCode=0 Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.939165 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" event={"ID":"c5836996-65fd-4b24-b757-269259483919","Type":"ContainerDied","Data":"bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507"} Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.945626 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.964958 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:30 crc kubenswrapper[4828]: I1129 07:01:30.983070 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.000693 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.015775 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.055545 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.070355 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.085187 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.104282 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.123335 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.137412 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.156996 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.172416 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.193030 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.207739 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.222043 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.233372 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.255782 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.278263 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.293137 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.308822 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.322227 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.336092 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.350829 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.364880 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.381032 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.396031 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.408846 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.411835 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.411856 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:31 crc kubenswrapper[4828]: E1129 07:01:31.411972 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.412023 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:31 crc kubenswrapper[4828]: E1129 07:01:31.412133 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:31 crc kubenswrapper[4828]: E1129 07:01:31.412207 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.424746 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.437765 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.455030 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.467842 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.478380 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.495536 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.509296 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.536562 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.553350 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.568638 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.582011 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.594980 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.609443 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.621716 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.643088 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.656649 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.670954 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.949664 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerStarted","Data":"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb"} Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.952698 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-26zg8" 
event={"ID":"8b3bb3f6-5c62-4db9-a1d3-0fd476518332","Type":"ContainerStarted","Data":"ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4"} Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.956242 4828 generic.go:334] "Generic (PLEG): container finished" podID="c5836996-65fd-4b24-b757-269259483919" containerID="678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef" exitCode=0 Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.956313 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" event={"ID":"c5836996-65fd-4b24-b757-269259483919","Type":"ContainerDied","Data":"678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef"} Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.974647 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:31 crc kubenswrapper[4828]: I1129 07:01:31.993186 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.009777 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.026213 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.039940 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.056137 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.072060 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.095616 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.113913 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.131167 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.145024 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.148160 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.155116 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.155695 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.155709 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.155821 4828 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.157040 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.163599 4828 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.170582 4828 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.172668 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\"
 for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.177353 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.177396 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.177404 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.177417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.177441 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:32Z","lastTransitionTime":"2025-11-29T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.186259 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: E1129 07:01:32.196872 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.201763 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.201803 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.201813 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.201829 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.201838 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:32Z","lastTransitionTime":"2025-11-29T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:32 crc kubenswrapper[4828]: E1129 07:01:32.215975 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.228417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.228471 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.228485 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.228504 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.228519 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:32Z","lastTransitionTime":"2025-11-29T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.233997 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: E1129 07:01:32.248046 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.265332 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.265403 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.265416 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.265433 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.265445 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:32Z","lastTransitionTime":"2025-11-29T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:32 crc kubenswrapper[4828]: E1129 07:01:32.295517 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.303386 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.303709 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.303804 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.303589 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.303901 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.304054 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:32Z","lastTransitionTime":"2025-11-29T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.326491 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: E1129 07:01:32.328837 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: E1129 07:01:32.328994 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.331535 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.331590 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.331603 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.331625 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.331640 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:32Z","lastTransitionTime":"2025-11-29T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.345464 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.388638 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.426156 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.435152 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.435207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.435221 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.435241 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.435254 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:32Z","lastTransitionTime":"2025-11-29T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.473283 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.507916 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 
07:01:32.537985 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.538262 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.538373 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.538491 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.538611 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:32Z","lastTransitionTime":"2025-11-29T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.545765 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.585161 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.623233 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.648446 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.648531 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.648551 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.648577 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.648594 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:32Z","lastTransitionTime":"2025-11-29T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.667981 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.705741 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.751915 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.751954 4828 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.751965 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.751983 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.751889 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.751996 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:32Z","lastTransitionTime":"2025-11-29T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.786251 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.825028 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.854216 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.854276 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.854288 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:32 crc 
kubenswrapper[4828]: I1129 07:01:32.854316 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.854329 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:32Z","lastTransitionTime":"2025-11-29T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.957407 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.957452 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.957463 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.957480 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:32 crc kubenswrapper[4828]: I1129 07:01:32.957491 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:32Z","lastTransitionTime":"2025-11-29T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.052106 4828 generic.go:334] "Generic (PLEG): container finished" podID="c5836996-65fd-4b24-b757-269259483919" containerID="cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045" exitCode=0 Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.052184 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.052326 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.052405 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.052511 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.052572 4828 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:41.052554438 +0000 UTC m=+40.674630496 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.052610 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:01:41.052582698 +0000 UTC m=+40.674658776 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.052628 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" event={"ID":"c5836996-65fd-4b24-b757-269259483919","Type":"ContainerDied","Data":"cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045"} Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.052772 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 
07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.052878 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:41.052852095 +0000 UTC m=+40.674928233 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.059493 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.059525 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.059536 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.059551 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.059563 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:33Z","lastTransitionTime":"2025-11-29T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.066601 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.083290 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.097482 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.108246 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.121896 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.131798 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.146283 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.153361 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.153426 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.153559 4828 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.153607 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.153623 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.153682 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.153723 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.153738 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.153697 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-29 07:01:41.153673515 +0000 UTC m=+40.775749663 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.153833 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:41.153813108 +0000 UTC m=+40.775889226 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.161810 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.162000 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.162030 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.162041 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:33 crc 
kubenswrapper[4828]: I1129 07:01:33.162058 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.162069 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:33Z","lastTransitionTime":"2025-11-29T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.188219 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.222679 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a
894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.271592 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.271889 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.272035 4828 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.272197 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.272338 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:33Z","lastTransitionTime":"2025-11-29T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.272522 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4
9117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\
\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c0
64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.305056 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.345013 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.376096 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.376143 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.376155 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.376171 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.376182 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:33Z","lastTransitionTime":"2025-11-29T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.383361 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.411612 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.411677 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.411758 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.411812 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.411612 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:33 crc kubenswrapper[4828]: E1129 07:01:33.411909 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.429623 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.478814 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.478857 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.478866 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.478883 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.478893 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:33Z","lastTransitionTime":"2025-11-29T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.581547 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.581611 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.581626 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.581645 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.581658 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:33Z","lastTransitionTime":"2025-11-29T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.684131 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.684555 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.684701 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.684795 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.684894 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:33Z","lastTransitionTime":"2025-11-29T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.788204 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.788252 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.788263 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.788295 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.788306 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:33Z","lastTransitionTime":"2025-11-29T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.891617 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.891661 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.891670 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.891695 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.891706 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:33Z","lastTransitionTime":"2025-11-29T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.994945 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.995105 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.995114 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.995129 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:33 crc kubenswrapper[4828]: I1129 07:01:33.995137 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:33Z","lastTransitionTime":"2025-11-29T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.062042 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" event={"ID":"c5836996-65fd-4b24-b757-269259483919","Type":"ContainerStarted","Data":"abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581"} Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.076501 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.092441 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.097716 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.097783 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.097794 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.097808 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.097817 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:34Z","lastTransitionTime":"2025-11-29T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.107474 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.118952 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.131050 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.143605 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.157883 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.171616 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.191941 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.200292 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.200329 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.200339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.200355 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.200365 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:34Z","lastTransitionTime":"2025-11-29T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.211695 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.226293 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.238942 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.253616 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.276459 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.293629 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.303725 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.303776 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.303792 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.303816 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.303830 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:34Z","lastTransitionTime":"2025-11-29T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.407429 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.407473 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.407482 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.407499 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.407509 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:34Z","lastTransitionTime":"2025-11-29T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.511014 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.511060 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.511072 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.511089 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.511107 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:34Z","lastTransitionTime":"2025-11-29T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.617954 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.617999 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.618010 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.618026 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.618038 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:34Z","lastTransitionTime":"2025-11-29T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.724619 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.724696 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.724709 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.724738 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.724762 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:34Z","lastTransitionTime":"2025-11-29T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.827465 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.827502 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.827512 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.827528 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.827540 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:34Z","lastTransitionTime":"2025-11-29T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.930813 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.930841 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.930854 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.930874 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:34 crc kubenswrapper[4828]: I1129 07:01:34.930883 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:34Z","lastTransitionTime":"2025-11-29T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.033699 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.033741 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.033751 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.033766 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.033776 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:35Z","lastTransitionTime":"2025-11-29T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.075185 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerStarted","Data":"30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5"} Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.075424 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.075451 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.075464 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.098891 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.110197 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.117783 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.119956 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v
5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.134988 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.136202 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.136245 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.136255 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.136307 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.136336 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:35Z","lastTransitionTime":"2025-11-29T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.156059 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b991113
2f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.168285 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.178110 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.196356 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.210558 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.231496 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.238387 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.238453 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.238465 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.238483 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.238497 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:35Z","lastTransitionTime":"2025-11-29T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.256740 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.271254 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.281124 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.300656 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.320039 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.333380 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.341364 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.341418 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.341429 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:35 crc 
kubenswrapper[4828]: I1129 07:01:35.341455 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.341483 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:35Z","lastTransitionTime":"2025-11-29T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.348433 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.361253 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.382694 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.399120 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.414481 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.414508 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:35 crc kubenswrapper[4828]: E1129 07:01:35.414666 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.414651 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release
\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.414950 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:35 crc kubenswrapper[4828]: E1129 07:01:35.414956 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:35 crc kubenswrapper[4828]: E1129 07:01:35.415232 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.440562 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"n
ame\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.444115 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.444162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.444174 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.444191 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.444202 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:35Z","lastTransitionTime":"2025-11-29T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.467774 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.482217 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.497628 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.517071 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.533495 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v
5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.546764 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.546821 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.546833 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.546850 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.546861 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:35Z","lastTransitionTime":"2025-11-29T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.549413 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.563652 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.576384 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.585467 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.650584 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.650624 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.650633 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.650648 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.650658 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:35Z","lastTransitionTime":"2025-11-29T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.753085 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.753478 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.753495 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.753520 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.753534 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:35Z","lastTransitionTime":"2025-11-29T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.856720 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.856951 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.857023 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.857095 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.857164 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:35Z","lastTransitionTime":"2025-11-29T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.959993 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.960505 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.960611 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.960779 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:35 crc kubenswrapper[4828]: I1129 07:01:35.960879 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:35Z","lastTransitionTime":"2025-11-29T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.065243 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.065295 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.065307 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.065325 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.065339 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:36Z","lastTransitionTime":"2025-11-29T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.078537 4828 generic.go:334] "Generic (PLEG): container finished" podID="c5836996-65fd-4b24-b757-269259483919" containerID="abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581" exitCode=0 Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.079465 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" event={"ID":"c5836996-65fd-4b24-b757-269259483919","Type":"ContainerDied","Data":"abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581"} Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.092973 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.109124 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.124454 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.140051 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.154883 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.168810 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.168845 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.168854 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.168868 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.168878 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:36Z","lastTransitionTime":"2025-11-29T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.169471 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.188592 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.203666 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.228831 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.254925 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.270870 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.272057 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.272079 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.272086 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.272100 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.272109 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:36Z","lastTransitionTime":"2025-11-29T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.286985 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.300520 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.317121 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.331739 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.374471 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.374514 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.374522 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.374538 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.374548 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:36Z","lastTransitionTime":"2025-11-29T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.477578 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.477625 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.477637 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.477654 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.477664 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:36Z","lastTransitionTime":"2025-11-29T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.580095 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.580156 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.580169 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.580188 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.580199 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:36Z","lastTransitionTime":"2025-11-29T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.685667 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.685712 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.685726 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.685744 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.685754 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:36Z","lastTransitionTime":"2025-11-29T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.788450 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.788506 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.788515 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.788530 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.788538 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:36Z","lastTransitionTime":"2025-11-29T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.891004 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.891057 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.891068 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.891087 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.891101 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:36Z","lastTransitionTime":"2025-11-29T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.993584 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.993627 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.993636 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.993654 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:36 crc kubenswrapper[4828]: I1129 07:01:36.993664 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:36Z","lastTransitionTime":"2025-11-29T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.085853 4828 generic.go:334] "Generic (PLEG): container finished" podID="c5836996-65fd-4b24-b757-269259483919" containerID="55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb" exitCode=0 Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.085954 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" event={"ID":"c5836996-65fd-4b24-b757-269259483919","Type":"ContainerDied","Data":"55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb"} Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.095947 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.096007 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.096020 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.096036 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.096048 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:37Z","lastTransitionTime":"2025-11-29T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.103330 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.123711 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.137555 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.153695 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.165865 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.186148 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.200533 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.200580 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.200593 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.200580 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.200611 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.200782 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:37Z","lastTransitionTime":"2025-11-29T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.218200 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.232485 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.249773 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.265813 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.280789 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.294067 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.302753 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.302806 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.302821 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.302847 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.302860 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:37Z","lastTransitionTime":"2025-11-29T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.398820 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.405784 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.405824 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.405836 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.405852 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.405863 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:37Z","lastTransitionTime":"2025-11-29T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.411687 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.411844 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:37 crc kubenswrapper[4828]: E1129 07:01:37.411978 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.412210 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.412360 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:37 crc kubenswrapper[4828]: E1129 07:01:37.412517 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:37 crc kubenswrapper[4828]: E1129 07:01:37.412716 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.508781 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.508812 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.508821 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.508834 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.508844 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:37Z","lastTransitionTime":"2025-11-29T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.611525 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.611664 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.611986 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.612032 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.612045 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:37Z","lastTransitionTime":"2025-11-29T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.715203 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.715247 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.715259 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.715303 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.715315 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:37Z","lastTransitionTime":"2025-11-29T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.819928 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.820158 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.820223 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.820320 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.820415 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:37Z","lastTransitionTime":"2025-11-29T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.926690 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.926742 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.926782 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.926799 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:37 crc kubenswrapper[4828]: I1129 07:01:37.926812 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:37Z","lastTransitionTime":"2025-11-29T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.029597 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.029659 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.029668 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.029685 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.029697 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:38Z","lastTransitionTime":"2025-11-29T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.095087 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" event={"ID":"c5836996-65fd-4b24-b757-269259483919","Type":"ContainerStarted","Data":"5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad"} Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.121627 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:0
1:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-cop
y\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.132837 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.132887 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.132900 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.132922 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.132937 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:38Z","lastTransitionTime":"2025-11-29T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.136896 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.180151 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.201405 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.221603 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.233089 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.235187 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.235241 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.235254 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.235284 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.235299 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:38Z","lastTransitionTime":"2025-11-29T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.247971 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.264696 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.283115 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.320215 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.334527 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.337227 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.337296 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.337311 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.337328 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.337339 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:38Z","lastTransitionTime":"2025-11-29T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.346229 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.359930 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.371655 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.389812 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.439707 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.439773 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.439795 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.439822 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.439843 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:38Z","lastTransitionTime":"2025-11-29T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.542720 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.542781 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.542798 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.542815 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.542826 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:38Z","lastTransitionTime":"2025-11-29T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.645480 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.645532 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.645543 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.645558 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.645570 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:38Z","lastTransitionTime":"2025-11-29T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.748733 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.748781 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.748790 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.748807 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.748817 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:38Z","lastTransitionTime":"2025-11-29T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.851801 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.851861 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.851873 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.851892 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.851905 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:38Z","lastTransitionTime":"2025-11-29T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.954558 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.954593 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.954604 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.954620 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:38 crc kubenswrapper[4828]: I1129 07:01:38.954632 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:38Z","lastTransitionTime":"2025-11-29T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.056889 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.056924 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.056933 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.056950 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.056966 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:39Z","lastTransitionTime":"2025-11-29T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.099988 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/0.log" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.103054 4828 generic.go:334] "Generic (PLEG): container finished" podID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerID="30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5" exitCode=1 Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.103108 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerDied","Data":"30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5"} Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.103875 4828 scope.go:117] "RemoveContainer" containerID="30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.128049 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.141682 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.160519 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.160561 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.160574 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:39 crc 
kubenswrapper[4828]: I1129 07:01:39.160592 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.160604 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:39Z","lastTransitionTime":"2025-11-29T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.162401 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:01:38.406635 6066 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 07:01:38.406658 6066 handler.go:190] 
Sending *v1.Node event handler 7 for removal\\\\nI1129 07:01:38.406664 6066 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:01:38.406682 6066 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1129 07:01:38.406706 6066 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1129 07:01:38.406711 6066 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1129 07:01:38.406723 6066 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 07:01:38.406734 6066 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:01:38.406736 6066 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1129 07:01:38.406753 6066 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 07:01:38.406760 6066 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1129 07:01:38.406735 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 07:01:38.406759 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 07:01:38.406749 6066 factory.go:656] Stopping watch factory\\\\nI1129 07:01:38.406764 6066 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.183720 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.196695 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.210990 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.226491 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.240516 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.252638 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.264163 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.264203 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.264212 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.264228 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.264237 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:39Z","lastTransitionTime":"2025-11-29T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.266365 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.282980 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.299885 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.313808 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.329745 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.342347 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.367747 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.367792 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.367803 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.367822 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.367836 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:39Z","lastTransitionTime":"2025-11-29T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.410827 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.410916 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.410828 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:39 crc kubenswrapper[4828]: E1129 07:01:39.411083 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:39 crc kubenswrapper[4828]: E1129 07:01:39.411258 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:39 crc kubenswrapper[4828]: E1129 07:01:39.411401 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.470041 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.470073 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.470084 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.470099 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.470111 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:39Z","lastTransitionTime":"2025-11-29T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.572649 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.572974 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.573068 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.573157 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.573249 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:39Z","lastTransitionTime":"2025-11-29T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.677413 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.677451 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.677460 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.677478 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.677487 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:39Z","lastTransitionTime":"2025-11-29T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.779530 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.779815 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.779922 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.779993 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.780056 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:39Z","lastTransitionTime":"2025-11-29T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.882672 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.882720 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.882731 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.882747 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.882759 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:39Z","lastTransitionTime":"2025-11-29T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.985568 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.985604 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.985612 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.985626 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:39 crc kubenswrapper[4828]: I1129 07:01:39.985635 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:39Z","lastTransitionTime":"2025-11-29T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.088026 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.088298 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.088396 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.088499 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.088585 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:40Z","lastTransitionTime":"2025-11-29T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.193374 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.193424 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.193433 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.193449 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.193459 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:40Z","lastTransitionTime":"2025-11-29T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.295808 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.296158 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.296291 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.296419 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.296500 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:40Z","lastTransitionTime":"2025-11-29T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.398750 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.398800 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.398814 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.398834 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.398848 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:40Z","lastTransitionTime":"2025-11-29T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.501491 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.501531 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.501542 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.501558 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.501571 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:40Z","lastTransitionTime":"2025-11-29T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.603950 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.603989 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.604001 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.604018 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.604027 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:40Z","lastTransitionTime":"2025-11-29T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.706499 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.706533 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.706546 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.706560 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.706571 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:40Z","lastTransitionTime":"2025-11-29T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.810314 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.810358 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.810368 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.810381 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.810394 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:40Z","lastTransitionTime":"2025-11-29T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.912667 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.913000 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.913088 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.913155 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:40 crc kubenswrapper[4828]: I1129 07:01:40.913227 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:40Z","lastTransitionTime":"2025-11-29T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.016160 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.016506 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.016740 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.016908 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.017056 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:41Z","lastTransitionTime":"2025-11-29T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.071991 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.072459 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:01:57.072407447 +0000 UTC m=+56.694483505 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.072752 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.073029 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.072947 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.073443 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-29 07:01:57.07343013 +0000 UTC m=+56.695506188 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.073229 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.074365 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:57.074327851 +0000 UTC m=+56.696403919 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.119333 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.119380 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.119393 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.119410 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.119422 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:41Z","lastTransitionTime":"2025-11-29T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.162342 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj"] Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.163004 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.165408 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.165892 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.173807 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99lc2\" (UniqueName: \"kubernetes.io/projected/959bd1c3-fd44-4090-996b-6539586c31ba-kube-api-access-99lc2\") pod \"ovnkube-control-plane-749d76644c-cv4sj\" (UID: \"959bd1c3-fd44-4090-996b-6539586c31ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.173875 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/959bd1c3-fd44-4090-996b-6539586c31ba-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cv4sj\" (UID: \"959bd1c3-fd44-4090-996b-6539586c31ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.173901 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/959bd1c3-fd44-4090-996b-6539586c31ba-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cv4sj\" (UID: \"959bd1c3-fd44-4090-996b-6539586c31ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.173943 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/959bd1c3-fd44-4090-996b-6539586c31ba-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cv4sj\" (UID: \"959bd1c3-fd44-4090-996b-6539586c31ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.173977 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.174004 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.174161 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.174190 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.174214 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.174263 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:01:57.174246629 +0000 UTC m=+56.796322687 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.174161 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.174364 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.174393 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.174471 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-29 07:01:57.174449034 +0000 UTC m=+56.796525132 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.175798 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.194128 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:01:38.406635 6066 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 07:01:38.406658 6066 handler.go:190] 
Sending *v1.Node event handler 7 for removal\\\\nI1129 07:01:38.406664 6066 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:01:38.406682 6066 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1129 07:01:38.406706 6066 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1129 07:01:38.406711 6066 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1129 07:01:38.406723 6066 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 07:01:38.406734 6066 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:01:38.406736 6066 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1129 07:01:38.406753 6066 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 07:01:38.406760 6066 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1129 07:01:38.406735 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 07:01:38.406759 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 07:01:38.406749 6066 factory.go:656] Stopping watch factory\\\\nI1129 07:01:38.406764 6066 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.206646 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.218458 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.222435 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.222501 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.222517 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.222540 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.222555 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:41Z","lastTransitionTime":"2025-11-29T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.231066 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.246246 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.265003 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.274780 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/959bd1c3-fd44-4090-996b-6539586c31ba-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cv4sj\" (UID: \"959bd1c3-fd44-4090-996b-6539586c31ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.274821 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/959bd1c3-fd44-4090-996b-6539586c31ba-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cv4sj\" (UID: \"959bd1c3-fd44-4090-996b-6539586c31ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.274865 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/959bd1c3-fd44-4090-996b-6539586c31ba-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cv4sj\" (UID: \"959bd1c3-fd44-4090-996b-6539586c31ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.274914 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99lc2\" (UniqueName: \"kubernetes.io/projected/959bd1c3-fd44-4090-996b-6539586c31ba-kube-api-access-99lc2\") pod \"ovnkube-control-plane-749d76644c-cv4sj\" (UID: \"959bd1c3-fd44-4090-996b-6539586c31ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.275467 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/959bd1c3-fd44-4090-996b-6539586c31ba-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cv4sj\" (UID: \"959bd1c3-fd44-4090-996b-6539586c31ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.275663 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/959bd1c3-fd44-4090-996b-6539586c31ba-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cv4sj\" (UID: \"959bd1c3-fd44-4090-996b-6539586c31ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.276337 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.280871 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/959bd1c3-fd44-4090-996b-6539586c31ba-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cv4sj\" (UID: \"959bd1c3-fd44-4090-996b-6539586c31ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.294884 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-99lc2\" (UniqueName: \"kubernetes.io/projected/959bd1c3-fd44-4090-996b-6539586c31ba-kube-api-access-99lc2\") pod \"ovnkube-control-plane-749d76644c-cv4sj\" (UID: \"959bd1c3-fd44-4090-996b-6539586c31ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.298803 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada
6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a
5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.315059 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.325205 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.325245 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:41 crc 
kubenswrapper[4828]: I1129 07:01:41.325255 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.325297 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.325310 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:41Z","lastTransitionTime":"2025-11-29T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.331096 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.345825 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.359558 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.369579 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.380842 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.393964 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.411207 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.411228 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.411311 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.411436 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.411709 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:41 crc kubenswrapper[4828]: E1129 07:01:41.412136 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.426857 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.428582 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:41 crc 
kubenswrapper[4828]: I1129 07:01:41.428628 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.428638 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.428655 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.428665 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:41Z","lastTransitionTime":"2025-11-29T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.437394 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.459817 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07
:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.476259 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.478481 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" Nov 29 07:01:41 crc kubenswrapper[4828]: W1129 07:01:41.493546 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod959bd1c3_fd44_4090_996b_6539586c31ba.slice/crio-53261b8be2424eb920f386994d87505b8484dd84690f17d2ab465d4208f8e342 WatchSource:0}: Error finding container 53261b8be2424eb920f386994d87505b8484dd84690f17d2ab465d4208f8e342: Status 404 returned error can't find the container with id 53261b8be2424eb920f386994d87505b8484dd84690f17d2ab465d4208f8e342 Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.493820 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.507231 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.526923 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.531712 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.531776 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.531787 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.531803 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.531815 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:41Z","lastTransitionTime":"2025-11-29T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.543623 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.567823 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.578241 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.589669 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.599152 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.609623 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.622400 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.632866 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.633771 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.633950 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.634055 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:41 crc 
kubenswrapper[4828]: I1129 07:01:41.634163 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.634339 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:41Z","lastTransitionTime":"2025-11-29T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.652949 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:01:38.406635 6066 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 07:01:38.406658 6066 handler.go:190] 
Sending *v1.Node event handler 7 for removal\\\\nI1129 07:01:38.406664 6066 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:01:38.406682 6066 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1129 07:01:38.406706 6066 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1129 07:01:38.406711 6066 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1129 07:01:38.406723 6066 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 07:01:38.406734 6066 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:01:38.406736 6066 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1129 07:01:38.406753 6066 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 07:01:38.406760 6066 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1129 07:01:38.406735 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 07:01:38.406759 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 07:01:38.406749 6066 factory.go:656] Stopping watch factory\\\\nI1129 07:01:38.406764 6066 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.737561 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.737803 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.737872 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.737947 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.738027 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:41Z","lastTransitionTime":"2025-11-29T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.840583 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.840862 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.840932 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.841008 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.841070 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:41Z","lastTransitionTime":"2025-11-29T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.944239 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.944345 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.944363 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.944387 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:41 crc kubenswrapper[4828]: I1129 07:01:41.944405 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:41Z","lastTransitionTime":"2025-11-29T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.047477 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.047522 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.047533 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.047549 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.047561 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:42Z","lastTransitionTime":"2025-11-29T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.112728 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" event={"ID":"959bd1c3-fd44-4090-996b-6539586c31ba","Type":"ContainerStarted","Data":"53261b8be2424eb920f386994d87505b8484dd84690f17d2ab465d4208f8e342"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.150898 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.150939 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.150950 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.150983 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.150995 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:42Z","lastTransitionTime":"2025-11-29T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.252777 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.252825 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.252836 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.252852 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.252861 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:42Z","lastTransitionTime":"2025-11-29T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.280452 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-4ffn6"] Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.280988 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:42 crc kubenswrapper[4828]: E1129 07:01:42.281065 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.298556 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.312966 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.341155 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:01:38.406635 6066 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 07:01:38.406658 6066 handler.go:190] 
Sending *v1.Node event handler 7 for removal\\\\nI1129 07:01:38.406664 6066 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:01:38.406682 6066 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1129 07:01:38.406706 6066 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1129 07:01:38.406711 6066 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1129 07:01:38.406723 6066 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 07:01:38.406734 6066 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:01:38.406736 6066 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1129 07:01:38.406753 6066 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 07:01:38.406760 6066 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1129 07:01:38.406735 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 07:01:38.406759 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 07:01:38.406749 6066 factory.go:656] Stopping watch factory\\\\nI1129 07:01:38.406764 6066 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.355383 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.356373 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.356413 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.356421 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:42 crc 
kubenswrapper[4828]: I1129 07:01:42.356451 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.356460 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:42Z","lastTransitionTime":"2025-11-29T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.365745 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.378221 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.387196 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpg8n\" 
(UniqueName: \"kubernetes.io/projected/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-kube-api-access-dpg8n\") pod \"network-metrics-daemon-4ffn6\" (UID: \"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\") " pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.387254 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs\") pod \"network-metrics-daemon-4ffn6\" (UID: \"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\") " pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.389793 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a
8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.417547 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.417593 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.417612 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.417629 4828 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.417640 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:42Z","lastTransitionTime":"2025-11-29T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.418306 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: E1129 07:01:42.431344 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.432874 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.435976 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.436010 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.436023 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.436043 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.436055 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:42Z","lastTransitionTime":"2025-11-29T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.445222 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc 
kubenswrapper[4828]: E1129 07:01:42.451577 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.455751 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.455794 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.455804 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.455826 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.455836 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:42Z","lastTransitionTime":"2025-11-29T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.461654 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: E1129 07:01:42.470360 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.480145 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.480204 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.480215 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.480233 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.480244 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:42Z","lastTransitionTime":"2025-11-29T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.482678 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.488478 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs\") pod \"network-metrics-daemon-4ffn6\" (UID: \"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\") " pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.488556 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpg8n\" (UniqueName: \"kubernetes.io/projected/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-kube-api-access-dpg8n\") pod \"network-metrics-daemon-4ffn6\" (UID: \"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\") " pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:42 crc kubenswrapper[4828]: E1129 07:01:42.488692 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:01:42 crc kubenswrapper[4828]: E1129 07:01:42.488805 4828 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs podName:f6581e2a-a98c-493d-8c8f-20c5b4c4b17c nodeName:}" failed. No retries permitted until 2025-11-29 07:01:42.988777696 +0000 UTC m=+42.610853754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs") pod "network-metrics-daemon-4ffn6" (UID: "f6581e2a-a98c-493d-8c8f-20c5b4c4b17c") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:01:42 crc kubenswrapper[4828]: E1129 07:01:42.494023 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.498425 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.498464 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.498477 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.498499 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.498517 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:42Z","lastTransitionTime":"2025-11-29T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.506166 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpg8n\" (UniqueName: \"kubernetes.io/projected/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-kube-api-access-dpg8n\") pod \"network-metrics-daemon-4ffn6\" (UID: \"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\") " pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.506588 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: E1129 07:01:42.510061 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: E1129 07:01:42.510185 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.513408 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.513433 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.513444 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.513458 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.513468 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:42Z","lastTransitionTime":"2025-11-29T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.520005 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.535693 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.548899 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.563897 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.615739 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.615824 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.615846 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.615862 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.615871 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:42Z","lastTransitionTime":"2025-11-29T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.718922 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.719066 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.719164 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.719250 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.719428 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:42Z","lastTransitionTime":"2025-11-29T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.822866 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.822910 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.822920 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.822940 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.822953 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:42Z","lastTransitionTime":"2025-11-29T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.925651 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.925685 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.925694 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.925713 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.925724 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:42Z","lastTransitionTime":"2025-11-29T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:42 crc kubenswrapper[4828]: I1129 07:01:42.994380 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs\") pod \"network-metrics-daemon-4ffn6\" (UID: \"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\") " pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:42 crc kubenswrapper[4828]: E1129 07:01:42.994654 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:01:42 crc kubenswrapper[4828]: E1129 07:01:42.994725 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs podName:f6581e2a-a98c-493d-8c8f-20c5b4c4b17c nodeName:}" failed. No retries permitted until 2025-11-29 07:01:43.994706199 +0000 UTC m=+43.616782257 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs") pod "network-metrics-daemon-4ffn6" (UID: "f6581e2a-a98c-493d-8c8f-20c5b4c4b17c") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.028014 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.028074 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.028082 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.028152 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.028166 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:43Z","lastTransitionTime":"2025-11-29T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.118375 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/0.log" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.121308 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerStarted","Data":"e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345"} Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.122800 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" event={"ID":"959bd1c3-fd44-4090-996b-6539586c31ba","Type":"ContainerStarted","Data":"72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464"} Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.130997 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.131051 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.131063 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.131082 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.131094 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:43Z","lastTransitionTime":"2025-11-29T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.238621 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.238709 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.238730 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.238752 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.238771 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:43Z","lastTransitionTime":"2025-11-29T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.342170 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.342225 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.342238 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.342256 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.342287 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:43Z","lastTransitionTime":"2025-11-29T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.413760 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:43 crc kubenswrapper[4828]: E1129 07:01:43.413913 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.414426 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.414445 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:43 crc kubenswrapper[4828]: E1129 07:01:43.414534 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:43 crc kubenswrapper[4828]: E1129 07:01:43.414606 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.444809 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.444864 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.444875 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.444901 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.444937 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:43Z","lastTransitionTime":"2025-11-29T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.547839 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.547909 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.547932 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.547956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.547972 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:43Z","lastTransitionTime":"2025-11-29T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.653227 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.653871 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.653884 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.653908 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.653922 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:43Z","lastTransitionTime":"2025-11-29T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.756019 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.756043 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.756050 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.756063 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.756072 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:43Z","lastTransitionTime":"2025-11-29T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.858261 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.858323 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.858336 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.858354 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.858366 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:43Z","lastTransitionTime":"2025-11-29T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.960768 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.960809 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.960821 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.960839 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:43 crc kubenswrapper[4828]: I1129 07:01:43.960856 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:43Z","lastTransitionTime":"2025-11-29T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.004795 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs\") pod \"network-metrics-daemon-4ffn6\" (UID: \"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\") " pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:44 crc kubenswrapper[4828]: E1129 07:01:44.004985 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:01:44 crc kubenswrapper[4828]: E1129 07:01:44.005080 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs podName:f6581e2a-a98c-493d-8c8f-20c5b4c4b17c nodeName:}" failed. No retries permitted until 2025-11-29 07:01:46.005059972 +0000 UTC m=+45.627136030 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs") pod "network-metrics-daemon-4ffn6" (UID: "f6581e2a-a98c-493d-8c8f-20c5b4c4b17c") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.063620 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.063742 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.063761 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.063780 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.063793 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:44Z","lastTransitionTime":"2025-11-29T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.129296 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.160400 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.165914 4828 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.165959 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.165971 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.165993 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.166004 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:44Z","lastTransitionTime":"2025-11-29T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.178873 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:01:38.406635 6066 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 07:01:38.406658 6066 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:01:38.406664 6066 handler.go:190] Sending *v1.Node event handler 
2 for removal\\\\nI1129 07:01:38.406682 6066 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1129 07:01:38.406706 6066 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1129 07:01:38.406711 6066 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1129 07:01:38.406723 6066 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 07:01:38.406734 6066 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:01:38.406736 6066 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1129 07:01:38.406753 6066 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 07:01:38.406760 6066 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1129 07:01:38.406735 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 07:01:38.406759 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 07:01:38.406749 6066 factory.go:656] Stopping watch factory\\\\nI1129 07:01:38.406764 6066 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.192478 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.205766 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.218565 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.229231 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.244083 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.254680 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.267966 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.268007 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.268017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.268033 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.268045 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:44Z","lastTransitionTime":"2025-11-29T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.279682 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.297107 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.314682 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.327925 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc 
kubenswrapper[4828]: I1129 07:01:44.345375 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.360038 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.370192 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.370290 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.370305 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.370324 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.370339 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:44Z","lastTransitionTime":"2025-11-29T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.372221 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.384016 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.399606 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.410999 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:44 crc kubenswrapper[4828]: E1129 07:01:44.411155 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.412447 4828 scope.go:117] "RemoveContainer" containerID="63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.478127 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.478286 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.479175 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.479292 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.479311 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:44Z","lastTransitionTime":"2025-11-29T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.586497 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.586554 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.586564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.586581 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.586592 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:44Z","lastTransitionTime":"2025-11-29T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.689230 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.689281 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.689294 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.689309 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.689319 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:44Z","lastTransitionTime":"2025-11-29T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.791965 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.792000 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.792009 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.792024 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.792033 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:44Z","lastTransitionTime":"2025-11-29T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.894985 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.895365 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.895378 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.895425 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.895443 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:44Z","lastTransitionTime":"2025-11-29T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.998224 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.998328 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.998342 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.998361 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:44 crc kubenswrapper[4828]: I1129 07:01:44.998373 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:44Z","lastTransitionTime":"2025-11-29T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.104594 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.104635 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.104753 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.104775 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.104785 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:45Z","lastTransitionTime":"2025-11-29T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.134760 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/1.log" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.135389 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/0.log" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.138149 4828 generic.go:334] "Generic (PLEG): container finished" podID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerID="e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345" exitCode=1 Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.138224 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerDied","Data":"e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345"} Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.138329 4828 scope.go:117] "RemoveContainer" containerID="30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.138807 4828 scope.go:117] "RemoveContainer" containerID="e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345" Nov 29 07:01:45 crc kubenswrapper[4828]: E1129 07:01:45.139005 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.141404 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.143187 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d"} Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.143763 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.145258 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" event={"ID":"959bd1c3-fd44-4090-996b-6539586c31ba","Type":"ContainerStarted","Data":"f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9"} Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.152766 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.166130 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.177862 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc 
kubenswrapper[4828]: I1129 07:01:45.194687 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.207795 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.207844 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.207854 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.207869 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.207880 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:45Z","lastTransitionTime":"2025-11-29T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.208449 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b991113
2f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.222147 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.232534 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.242013 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.252589 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.265123 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.286102 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:01:38.406635 6066 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 07:01:38.406658 6066 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:01:38.406664 6066 handler.go:190] Sending *v1.Node event handler 
2 for removal\\\\nI1129 07:01:38.406682 6066 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1129 07:01:38.406706 6066 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1129 07:01:38.406711 6066 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1129 07:01:38.406723 6066 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 07:01:38.406734 6066 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:01:38.406736 6066 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1129 07:01:38.406753 6066 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 07:01:38.406760 6066 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1129 07:01:38.406735 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 07:01:38.406759 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 07:01:38.406749 6066 factory.go:656] Stopping watch factory\\\\nI1129 07:01:38.406764 6066 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:44Z\\\",\\\"message\\\":\\\"GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:01:44.288983 6246 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:01:44.290288 6246 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1129 07:01:44.290442 6246 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1129 07:01:44.290477 6246 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:01:44.290658 6246 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:01:44.290752 6246 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mo
untPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPat
h\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 
crc kubenswrapper[4828]: I1129 07:01:45.301358 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.310054 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.310092 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.310104 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.310120 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.310131 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:45Z","lastTransitionTime":"2025-11-29T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.346983 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.412774 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:45 crc kubenswrapper[4828]: E1129 07:01:45.412920 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.412816 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:45 crc kubenswrapper[4828]: E1129 07:01:45.412991 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.412774 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:45 crc kubenswrapper[4828]: E1129 07:01:45.413043 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.414453 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.414549 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.414563 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.414581 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.414596 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:45Z","lastTransitionTime":"2025-11-29T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.434376 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.448732 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.461355 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.475212 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.494841 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.507084 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.518746 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.518784 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.518793 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.518808 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.518818 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:45Z","lastTransitionTime":"2025-11-29T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.524628 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.536254 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.548787 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.558981 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.572058 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.585704 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.596050 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc 
kubenswrapper[4828]: I1129 07:01:45.610456 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.621876 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.621924 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.621941 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.621961 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.621976 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:45Z","lastTransitionTime":"2025-11-29T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.628165 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b991113
2f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.650885 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.660551 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.670074 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.682102 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.693533 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.711798 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30eb7ec2201adb40819113a5278c7c3d47eda98b26dc13e16fec7e03443e4af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:01:38.406635 6066 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 07:01:38.406658 6066 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:01:38.406664 6066 handler.go:190] Sending *v1.Node event handler 
2 for removal\\\\nI1129 07:01:38.406682 6066 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1129 07:01:38.406706 6066 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1129 07:01:38.406711 6066 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1129 07:01:38.406723 6066 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 07:01:38.406734 6066 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:01:38.406736 6066 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1129 07:01:38.406753 6066 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 07:01:38.406760 6066 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1129 07:01:38.406735 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 07:01:38.406759 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 07:01:38.406749 6066 factory.go:656] Stopping watch factory\\\\nI1129 07:01:38.406764 6066 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:44Z\\\",\\\"message\\\":\\\"GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:01:44.288983 6246 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:01:44.290288 6246 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1129 07:01:44.290442 6246 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1129 07:01:44.290477 6246 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:01:44.290658 6246 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:01:44.290752 6246 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mo
untPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPat
h\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:45 
crc kubenswrapper[4828]: I1129 07:01:45.723927 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.723964 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.723973 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.723987 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.723996 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:45Z","lastTransitionTime":"2025-11-29T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.826178 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.826211 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.826219 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.826234 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.826244 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:45Z","lastTransitionTime":"2025-11-29T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.929276 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.929319 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.929335 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.929353 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:45 crc kubenswrapper[4828]: I1129 07:01:45.929366 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:45Z","lastTransitionTime":"2025-11-29T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.023749 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs\") pod \"network-metrics-daemon-4ffn6\" (UID: \"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\") " pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:46 crc kubenswrapper[4828]: E1129 07:01:46.023890 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:01:46 crc kubenswrapper[4828]: E1129 07:01:46.023960 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs podName:f6581e2a-a98c-493d-8c8f-20c5b4c4b17c nodeName:}" failed. No retries permitted until 2025-11-29 07:01:50.023941584 +0000 UTC m=+49.646017642 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs") pod "network-metrics-daemon-4ffn6" (UID: "f6581e2a-a98c-493d-8c8f-20c5b4c4b17c") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.031569 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.031609 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.031617 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.031634 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.031647 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:46Z","lastTransitionTime":"2025-11-29T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.134250 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.134302 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.134312 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.134348 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.134362 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:46Z","lastTransitionTime":"2025-11-29T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.150702 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/1.log" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.155902 4828 scope.go:117] "RemoveContainer" containerID="e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345" Nov 29 07:01:46 crc kubenswrapper[4828]: E1129 07:01:46.156123 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.179342 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.194039 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.207037 4828 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.219121 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.231853 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.235873 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.235914 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.235928 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.235950 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.235965 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:46Z","lastTransitionTime":"2025-11-29T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.245571 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.257110 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.274293 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:44Z\\\",\\\"message\\\":\\\"GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI1129 07:01:44.288983 6246 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:01:44.290288 6246 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1129 07:01:44.290442 6246 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1129 07:01:44.290477 6246 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:01:44.290658 6246 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:01:44.290752 6246 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044
f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.287903 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.297787 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.315938 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
1-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.328950 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.338596 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.338637 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.338646 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.338662 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.338671 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:46Z","lastTransitionTime":"2025-11-29T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.342785 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.354378 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.366408 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.384856 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.395592 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:46 crc 
kubenswrapper[4828]: I1129 07:01:46.411223 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:46 crc kubenswrapper[4828]: E1129 07:01:46.411418 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.440930 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.440977 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.440993 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.441017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.441029 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:46Z","lastTransitionTime":"2025-11-29T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.543407 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.543446 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.543456 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.543471 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.543482 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:46Z","lastTransitionTime":"2025-11-29T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.645801 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.645855 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.645867 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.645890 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.645909 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:46Z","lastTransitionTime":"2025-11-29T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.748710 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.749038 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.749143 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.749312 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.749389 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:46Z","lastTransitionTime":"2025-11-29T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.852483 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.852533 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.852552 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.852569 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.852581 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:46Z","lastTransitionTime":"2025-11-29T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.955794 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.955841 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.955860 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.955883 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:46 crc kubenswrapper[4828]: I1129 07:01:46.955896 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:46Z","lastTransitionTime":"2025-11-29T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.058240 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.058280 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.058291 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.058316 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.058329 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:47Z","lastTransitionTime":"2025-11-29T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.162740 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.162785 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.162795 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.162813 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.162827 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:47Z","lastTransitionTime":"2025-11-29T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.265495 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.265553 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.265564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.265583 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.265597 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:47Z","lastTransitionTime":"2025-11-29T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.367712 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.368401 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.368446 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.368475 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.368486 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:47Z","lastTransitionTime":"2025-11-29T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.411213 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.411227 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.411252 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:47 crc kubenswrapper[4828]: E1129 07:01:47.411758 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:47 crc kubenswrapper[4828]: E1129 07:01:47.411827 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:47 crc kubenswrapper[4828]: E1129 07:01:47.411888 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.471255 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.471336 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.471353 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.471377 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.471394 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:47Z","lastTransitionTime":"2025-11-29T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.577597 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.577673 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.577682 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.577706 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.577725 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:47Z","lastTransitionTime":"2025-11-29T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.680349 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.680393 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.680410 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.680436 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.680445 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:47Z","lastTransitionTime":"2025-11-29T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.782869 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.783229 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.783251 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.783287 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.783299 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:47Z","lastTransitionTime":"2025-11-29T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.885737 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.885786 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.885798 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.885816 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.885827 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:47Z","lastTransitionTime":"2025-11-29T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.988388 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.988688 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.988782 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.988944 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:47 crc kubenswrapper[4828]: I1129 07:01:47.989048 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:47Z","lastTransitionTime":"2025-11-29T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.118724 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.118781 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.118794 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.118812 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.118823 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:48Z","lastTransitionTime":"2025-11-29T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.221402 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.221437 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.221450 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.221468 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.221481 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:48Z","lastTransitionTime":"2025-11-29T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.324303 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.324356 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.324368 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.324390 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.324403 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:48Z","lastTransitionTime":"2025-11-29T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.411610 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:48 crc kubenswrapper[4828]: E1129 07:01:48.411957 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.427099 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.427150 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.427159 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.427179 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.427188 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:48Z","lastTransitionTime":"2025-11-29T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.529544 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.529583 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.529594 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.529611 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.529621 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:48Z","lastTransitionTime":"2025-11-29T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.631830 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.631864 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.631873 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.631886 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.631900 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:48Z","lastTransitionTime":"2025-11-29T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.735306 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.735343 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.735353 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.735370 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.735382 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:48Z","lastTransitionTime":"2025-11-29T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.838355 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.838644 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.838726 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.838801 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.838883 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:48Z","lastTransitionTime":"2025-11-29T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.942190 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.942251 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.942260 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.942293 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:48 crc kubenswrapper[4828]: I1129 07:01:48.942303 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:48Z","lastTransitionTime":"2025-11-29T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.044214 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.044249 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.044258 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.044331 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.044390 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:49Z","lastTransitionTime":"2025-11-29T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.147336 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.147588 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.147704 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.147803 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.147885 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:49Z","lastTransitionTime":"2025-11-29T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.252213 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.252261 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.252288 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.252313 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.252326 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:49Z","lastTransitionTime":"2025-11-29T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.354384 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.354430 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.354454 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.354472 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.354483 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:49Z","lastTransitionTime":"2025-11-29T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.411077 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.411091 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.411242 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:49 crc kubenswrapper[4828]: E1129 07:01:49.411375 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:49 crc kubenswrapper[4828]: E1129 07:01:49.411517 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:49 crc kubenswrapper[4828]: E1129 07:01:49.411666 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.457472 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.457525 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.457542 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.457567 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.457579 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:49Z","lastTransitionTime":"2025-11-29T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.560141 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.560181 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.560193 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.560208 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.560219 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:49Z","lastTransitionTime":"2025-11-29T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.662645 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.662678 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.662688 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.662701 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.662712 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:49Z","lastTransitionTime":"2025-11-29T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.765815 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.765858 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.765869 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.765884 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.765896 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:49Z","lastTransitionTime":"2025-11-29T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.867846 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.867894 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.867907 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.867925 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.867937 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:49Z","lastTransitionTime":"2025-11-29T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.970658 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.970706 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.970719 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.970740 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:49 crc kubenswrapper[4828]: I1129 07:01:49.970753 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:49Z","lastTransitionTime":"2025-11-29T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.072927 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.072984 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.072997 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.073017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.073032 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:50Z","lastTransitionTime":"2025-11-29T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.104131 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs\") pod \"network-metrics-daemon-4ffn6\" (UID: \"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\") " pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:50 crc kubenswrapper[4828]: E1129 07:01:50.104461 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:01:50 crc kubenswrapper[4828]: E1129 07:01:50.104589 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs podName:f6581e2a-a98c-493d-8c8f-20c5b4c4b17c nodeName:}" failed. No retries permitted until 2025-11-29 07:01:58.104543541 +0000 UTC m=+57.726619599 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs") pod "network-metrics-daemon-4ffn6" (UID: "f6581e2a-a98c-493d-8c8f-20c5b4c4b17c") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.176048 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.176106 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.176121 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.176139 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.176151 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:50Z","lastTransitionTime":"2025-11-29T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.278796 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.278835 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.278848 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.278865 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.278877 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:50Z","lastTransitionTime":"2025-11-29T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.381907 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.381961 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.381974 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.382037 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.382052 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:50Z","lastTransitionTime":"2025-11-29T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.411559 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:50 crc kubenswrapper[4828]: E1129 07:01:50.411808 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.484479 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.484528 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.484537 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.484554 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.484564 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:50Z","lastTransitionTime":"2025-11-29T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.587059 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.587112 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.587121 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.587134 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.587147 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:50Z","lastTransitionTime":"2025-11-29T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.689827 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.689877 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.689890 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.689910 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.689925 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:50Z","lastTransitionTime":"2025-11-29T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.792932 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.792989 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.793000 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.793018 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.793029 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:50Z","lastTransitionTime":"2025-11-29T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.896711 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.896755 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.896767 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.896786 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.896797 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:50Z","lastTransitionTime":"2025-11-29T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.999329 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.999360 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.999368 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.999384 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:50 crc kubenswrapper[4828]: I1129 07:01:50.999393 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:50Z","lastTransitionTime":"2025-11-29T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.101983 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.102022 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.102039 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.102058 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.102070 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:51Z","lastTransitionTime":"2025-11-29T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.204281 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.204322 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.204331 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.204348 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.204361 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:51Z","lastTransitionTime":"2025-11-29T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.307958 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.307991 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.308004 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.308021 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.308032 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:51Z","lastTransitionTime":"2025-11-29T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.410655 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.410698 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:51 crc kubenswrapper[4828]: E1129 07:01:51.410791 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:51 crc kubenswrapper[4828]: E1129 07:01:51.410885 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.410941 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:51 crc kubenswrapper[4828]: E1129 07:01:51.411058 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.411219 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.411245 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.411297 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.411314 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.411325 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:51Z","lastTransitionTime":"2025-11-29T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.429980 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.443354 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.456316 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.469559 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.481688 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.502165 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.514244 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.514345 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.514358 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.514403 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.514417 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:51Z","lastTransitionTime":"2025-11-29T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.517877 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.529501 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc 
kubenswrapper[4828]: I1129 07:01:51.543634 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.557401 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.571023 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.586241 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.600002 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.613786 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.616775 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.616812 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.616825 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.616841 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.616852 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:51Z","lastTransitionTime":"2025-11-29T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.636647 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:44Z\\\",\\\"message\\\":\\\"GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI1129 07:01:44.288983 6246 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:01:44.290288 6246 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1129 07:01:44.290442 6246 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1129 07:01:44.290477 6246 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:01:44.290658 6246 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:01:44.290752 6246 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044
f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.653061 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.672149 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.720381 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.720430 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.720441 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:51 crc 
kubenswrapper[4828]: I1129 07:01:51.720460 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.720471 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:51Z","lastTransitionTime":"2025-11-29T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.823747 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.823782 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.823790 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.823805 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.823816 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:51Z","lastTransitionTime":"2025-11-29T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.926039 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.926117 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.926127 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.926144 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:51 crc kubenswrapper[4828]: I1129 07:01:51.926158 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:51Z","lastTransitionTime":"2025-11-29T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.029221 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.029285 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.029298 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.029317 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.029331 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:52Z","lastTransitionTime":"2025-11-29T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.132422 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.132681 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.132813 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.132903 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.133011 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:52Z","lastTransitionTime":"2025-11-29T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.235882 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.236183 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.236364 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.236495 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.236631 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:52Z","lastTransitionTime":"2025-11-29T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.339193 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.339506 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.339579 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.339652 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.339731 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:52Z","lastTransitionTime":"2025-11-29T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.411120 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:52 crc kubenswrapper[4828]: E1129 07:01:52.411321 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.441935 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.442015 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.442027 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.442050 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.442063 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:52Z","lastTransitionTime":"2025-11-29T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.544309 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.544353 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.544364 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.544380 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.544391 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:52Z","lastTransitionTime":"2025-11-29T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.614655 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.614717 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.614733 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.614754 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.614768 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:52Z","lastTransitionTime":"2025-11-29T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:52 crc kubenswrapper[4828]: E1129 07:01:52.629846 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.637730 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.637771 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.637782 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.637800 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.637812 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:52Z","lastTransitionTime":"2025-11-29T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.651789 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:01:52 crc kubenswrapper[4828]: E1129 07:01:52.654424 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6
bc31d088\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.659654 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.659693 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.659705 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.659724 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.659736 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:52Z","lastTransitionTime":"2025-11-29T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.660897 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.666682 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: E1129 07:01:52.674552 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.678527 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.678557 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.678568 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.678582 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.678598 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:52Z","lastTransitionTime":"2025-11-29T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.682821 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: E1129 07:01:52.691599 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.695413 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.695447 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.695492 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.695518 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.695533 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:52Z","lastTransitionTime":"2025-11-29T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.704345 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:44Z\\\",\\\"message\\\":\\\"GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI1129 07:01:44.288983 6246 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:01:44.290288 6246 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1129 07:01:44.290442 6246 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1129 07:01:44.290477 6246 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:01:44.290658 6246 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:01:44.290752 6246 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044
f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: E1129 07:01:52.707213 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: E1129 07:01:52.707365 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.709424 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.709452 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.709463 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.709477 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.709486 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:52Z","lastTransitionTime":"2025-11-29T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.719694 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.736067 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.748637 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.771531 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.784628 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.796433 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.808672 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.811948 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.811982 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.811994 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.812011 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.812021 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:52Z","lastTransitionTime":"2025-11-29T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.824722 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.836191 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc 
kubenswrapper[4828]: I1129 07:01:52.850159 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.865459 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.880762 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.894182 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.908865 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.914589 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.914618 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.914628 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.914642 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:52 crc kubenswrapper[4828]: I1129 07:01:52.914651 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:52Z","lastTransitionTime":"2025-11-29T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.016725 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.016761 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.016772 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.016788 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.016799 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:53Z","lastTransitionTime":"2025-11-29T07:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.714482 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:53 crc kubenswrapper[4828]: E1129 07:01:53.714672 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.715047 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:53 crc kubenswrapper[4828]: E1129 07:01:53.715113 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.715556 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:53 crc kubenswrapper[4828]: E1129 07:01:53.715695 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.715821 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.715626 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.715896 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.715906 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:53 crc kubenswrapper[4828]: E1129 07:01:53.715906 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.715920 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.715930 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:53Z","lastTransitionTime":"2025-11-29T07:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.820724 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.820783 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.820807 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.820831 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.820845 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:53Z","lastTransitionTime":"2025-11-29T07:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.923559 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.923603 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.923613 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.923630 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:53 crc kubenswrapper[4828]: I1129 07:01:53.923641 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:53Z","lastTransitionTime":"2025-11-29T07:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.025970 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.026038 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.026052 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.026073 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.026087 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:54Z","lastTransitionTime":"2025-11-29T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.128689 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.128732 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.128742 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.128758 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.128766 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:54Z","lastTransitionTime":"2025-11-29T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.231906 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.231965 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.231979 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.231996 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.232009 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:54Z","lastTransitionTime":"2025-11-29T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.334479 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.334525 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.334534 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.334550 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.334560 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:54Z","lastTransitionTime":"2025-11-29T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.437653 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.437701 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.437714 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.437733 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.437744 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:54Z","lastTransitionTime":"2025-11-29T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.539936 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.539979 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.539990 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.540007 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.540018 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:54Z","lastTransitionTime":"2025-11-29T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.631720 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.644641 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.644692 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.644703 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.644726 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.644739 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:54Z","lastTransitionTime":"2025-11-29T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.645904 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.657423 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.675663 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:44Z\\\",\\\"message\\\":\\\"GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI1129 07:01:44.288983 6246 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:01:44.290288 6246 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1129 07:01:44.290442 6246 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1129 07:01:44.290477 6246 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:01:44.290658 6246 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:01:44.290752 6246 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044
f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.687632 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.701938 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.712410 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.735531 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.747512 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.747557 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.747568 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.747585 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.747599 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:54Z","lastTransitionTime":"2025-11-29T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.752003 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87ab1-f8c3-4d1e-9bcf-b3e3bbcb34d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210a8b6f3a1cc8705eb905de8c8f2b
f7a50c8863b8a4807626e1c35693129ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.766418 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.780671 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.793420 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.808136 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.817811 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc 
kubenswrapper[4828]: I1129 07:01:54.828552 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.842988 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6
a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.850324 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.850375 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.850385 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.850398 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.850408 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:54Z","lastTransitionTime":"2025-11-29T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.855977 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b991113
2f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.869172 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.882065 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:01:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.953465 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.953541 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.953553 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.953572 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:54 crc kubenswrapper[4828]: I1129 07:01:54.953583 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:54Z","lastTransitionTime":"2025-11-29T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.055989 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.056031 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.056040 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.056061 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.056080 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:55Z","lastTransitionTime":"2025-11-29T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.158091 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.158137 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.158147 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.158162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.158172 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:55Z","lastTransitionTime":"2025-11-29T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.261281 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.261327 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.261338 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.261353 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.261366 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:55Z","lastTransitionTime":"2025-11-29T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.363677 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.363706 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.363714 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.363730 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.363749 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:55Z","lastTransitionTime":"2025-11-29T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.410742 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.410832 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.410860 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:55 crc kubenswrapper[4828]: E1129 07:01:55.410919 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:55 crc kubenswrapper[4828]: E1129 07:01:55.410987 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.411054 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:55 crc kubenswrapper[4828]: E1129 07:01:55.411170 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:55 crc kubenswrapper[4828]: E1129 07:01:55.411281 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.466061 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.466115 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.466125 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.466141 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.466152 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:55Z","lastTransitionTime":"2025-11-29T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.569003 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.569046 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.569058 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.569076 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.569091 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:55Z","lastTransitionTime":"2025-11-29T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.672322 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.672367 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.672377 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.672395 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.672405 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:55Z","lastTransitionTime":"2025-11-29T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.774962 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.775004 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.775013 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.775029 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.775041 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:55Z","lastTransitionTime":"2025-11-29T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.878543 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.878598 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.878607 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.878628 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.878639 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:55Z","lastTransitionTime":"2025-11-29T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.981030 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.981077 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.981087 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.981100 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:55 crc kubenswrapper[4828]: I1129 07:01:55.981109 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:55Z","lastTransitionTime":"2025-11-29T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.084695 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.084748 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.084763 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.084786 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.084805 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:56Z","lastTransitionTime":"2025-11-29T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.187362 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.187400 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.187410 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.187425 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.187435 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:56Z","lastTransitionTime":"2025-11-29T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.290331 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.290373 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.290386 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.290403 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.290415 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:56Z","lastTransitionTime":"2025-11-29T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.392411 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.392458 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.392470 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.392487 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.392500 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:56Z","lastTransitionTime":"2025-11-29T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.495779 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.495827 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.495836 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.495853 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.495864 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:56Z","lastTransitionTime":"2025-11-29T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.598176 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.598217 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.598226 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.598239 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.598247 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:56Z","lastTransitionTime":"2025-11-29T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.700196 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.700231 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.700240 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.700252 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.700262 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:56Z","lastTransitionTime":"2025-11-29T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.803504 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.803560 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.803572 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.803603 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.803617 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:56Z","lastTransitionTime":"2025-11-29T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.906092 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.906144 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.906155 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.906169 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:56 crc kubenswrapper[4828]: I1129 07:01:56.906177 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:56Z","lastTransitionTime":"2025-11-29T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.008822 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.008866 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.008876 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.008892 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.008901 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:57Z","lastTransitionTime":"2025-11-29T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.082120 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.082232 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.082288 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:02:29.08225785 +0000 UTC m=+88.704333908 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.082320 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.082501 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.082561 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:02:29.082547467 +0000 UTC m=+88.704623525 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.082561 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.082618 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:02:29.082605528 +0000 UTC m=+88.704681586 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.111748 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.111801 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.111814 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.111829 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:57 crc 
kubenswrapper[4828]: I1129 07:01:57.111839 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:57Z","lastTransitionTime":"2025-11-29T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.183619 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.183656 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.183865 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.183908 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.183929 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.183865 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.184004 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.184016 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.183985 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:02:29.183968419 +0000 UTC m=+88.806044487 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.184131 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:02:29.184082832 +0000 UTC m=+88.806158900 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.214317 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.214399 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.214412 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.214434 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.214443 4828 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:57Z","lastTransitionTime":"2025-11-29T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.316816 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.316855 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.316868 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.316883 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.316894 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:57Z","lastTransitionTime":"2025-11-29T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.411449 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.411586 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.411709 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.411724 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.411742 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.411865 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.412433 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:01:57 crc kubenswrapper[4828]: E1129 07:01:57.412367 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.418376 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.418417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.418429 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.418448 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.418460 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:57Z","lastTransitionTime":"2025-11-29T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.521348 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.521393 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.521404 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.521419 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.521428 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:57Z","lastTransitionTime":"2025-11-29T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.624100 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.624132 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.624143 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.624160 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.624174 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:57Z","lastTransitionTime":"2025-11-29T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.733662 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.733726 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.733746 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.733776 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.733799 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:57Z","lastTransitionTime":"2025-11-29T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.836316 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.836383 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.836392 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.836405 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.836414 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:57Z","lastTransitionTime":"2025-11-29T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.939251 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.939313 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.939325 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.939342 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:57 crc kubenswrapper[4828]: I1129 07:01:57.939353 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:57Z","lastTransitionTime":"2025-11-29T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.044525 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.044558 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.044566 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.044580 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.044589 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:58Z","lastTransitionTime":"2025-11-29T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.146844 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.146879 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.146889 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.146905 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.146916 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:58Z","lastTransitionTime":"2025-11-29T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.193817 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs\") pod \"network-metrics-daemon-4ffn6\" (UID: \"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\") " pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:58 crc kubenswrapper[4828]: E1129 07:01:58.193984 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:01:58 crc kubenswrapper[4828]: E1129 07:01:58.194051 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs podName:f6581e2a-a98c-493d-8c8f-20c5b4c4b17c nodeName:}" failed. No retries permitted until 2025-11-29 07:02:14.194033645 +0000 UTC m=+73.816109723 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs") pod "network-metrics-daemon-4ffn6" (UID: "f6581e2a-a98c-493d-8c8f-20c5b4c4b17c") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.249590 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.249627 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.249634 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.249649 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.249660 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:58Z","lastTransitionTime":"2025-11-29T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.352367 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.352405 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.352417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.352434 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.352444 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:58Z","lastTransitionTime":"2025-11-29T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.413008 4828 scope.go:117] "RemoveContainer" containerID="e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.454137 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.454239 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.454258 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.454321 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.454340 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:58Z","lastTransitionTime":"2025-11-29T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.557176 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.557226 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.557237 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.557257 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.557283 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:58Z","lastTransitionTime":"2025-11-29T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.659770 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.659826 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.659843 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.659866 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.659882 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:58Z","lastTransitionTime":"2025-11-29T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.762666 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.762715 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.762730 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.762747 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.762759 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:58Z","lastTransitionTime":"2025-11-29T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.865421 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.865461 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.865473 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.865494 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.865506 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:58Z","lastTransitionTime":"2025-11-29T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.968918 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.968954 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.968967 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.968991 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:58 crc kubenswrapper[4828]: I1129 07:01:58.969014 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:58Z","lastTransitionTime":"2025-11-29T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.071800 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.071840 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.071849 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.071867 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.071878 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:59Z","lastTransitionTime":"2025-11-29T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.174827 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.174893 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.174920 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.174941 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.174952 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:59Z","lastTransitionTime":"2025-11-29T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.277824 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.277860 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.277869 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.277882 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.277892 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:59Z","lastTransitionTime":"2025-11-29T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.380192 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.380227 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.380237 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.380253 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.380263 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:59Z","lastTransitionTime":"2025-11-29T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.410993 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.411033 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.411050 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:01:59 crc kubenswrapper[4828]: E1129 07:01:59.411160 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:01:59 crc kubenswrapper[4828]: E1129 07:01:59.411322 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:01:59 crc kubenswrapper[4828]: E1129 07:01:59.411417 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.411616 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:01:59 crc kubenswrapper[4828]: E1129 07:01:59.411773 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.483361 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.483799 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.483819 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.483945 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.483977 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:59Z","lastTransitionTime":"2025-11-29T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.587806 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.587853 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.587869 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.587896 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.587908 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:59Z","lastTransitionTime":"2025-11-29T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.690290 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.690333 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.690341 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.690359 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.690369 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:59Z","lastTransitionTime":"2025-11-29T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.792454 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.792504 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.792519 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.792536 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.792549 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:59Z","lastTransitionTime":"2025-11-29T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.896872 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.896923 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.896938 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.896956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.896973 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:59Z","lastTransitionTime":"2025-11-29T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.999102 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.999154 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.999166 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.999180 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:01:59 crc kubenswrapper[4828]: I1129 07:01:59.999190 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:01:59Z","lastTransitionTime":"2025-11-29T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.101468 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.101504 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.101512 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.101525 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.101534 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:00Z","lastTransitionTime":"2025-11-29T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.204140 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.204202 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.204221 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.204246 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.204265 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:00Z","lastTransitionTime":"2025-11-29T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.306728 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.306801 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.306824 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.306855 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.306877 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:00Z","lastTransitionTime":"2025-11-29T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.409343 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.409393 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.409405 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.409425 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.409441 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:00Z","lastTransitionTime":"2025-11-29T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.516542 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.516682 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.516709 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.516731 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.516747 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:00Z","lastTransitionTime":"2025-11-29T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.620897 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.620964 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.620980 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.621001 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.621013 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:00Z","lastTransitionTime":"2025-11-29T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.723756 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.723805 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.723822 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.723842 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.723858 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:00Z","lastTransitionTime":"2025-11-29T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.827066 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.827123 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.827143 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.827164 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.827179 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:00Z","lastTransitionTime":"2025-11-29T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.930058 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.930111 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.930122 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.930140 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:00 crc kubenswrapper[4828]: I1129 07:02:00.930154 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:00Z","lastTransitionTime":"2025-11-29T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.035743 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.035791 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.035799 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.035828 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.035854 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:01Z","lastTransitionTime":"2025-11-29T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.138911 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.138984 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.139021 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.139054 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.139075 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:01Z","lastTransitionTime":"2025-11-29T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.242187 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.242245 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.242260 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.242304 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.242322 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:01Z","lastTransitionTime":"2025-11-29T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.344358 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.344414 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.344425 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.344443 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.344467 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:01Z","lastTransitionTime":"2025-11-29T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.411195 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.411193 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.411193 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.411185 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:01 crc kubenswrapper[4828]: E1129 07:02:01.411705 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:01 crc kubenswrapper[4828]: E1129 07:02:01.411785 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:01 crc kubenswrapper[4828]: E1129 07:02:01.411906 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:01 crc kubenswrapper[4828]: E1129 07:02:01.412061 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.432383 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.449290 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.449339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.449351 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.449369 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.449379 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:01Z","lastTransitionTime":"2025-11-29T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.449392 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z 
is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.459925 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.477867 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426eb
f304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.489043 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87ab1-f8c3-4d1e-9bcf-b3e3bbcb34d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210a8b6f3a1cc8705eb905de8c8f2bf7a50c8863b8a4807626e1c35693129ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.502530 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.520255 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.535044 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.552368 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.552406 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.552417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 
07:02:01.552432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.552441 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:01Z","lastTransitionTime":"2025-11-29T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.560868 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcab
b9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc
7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.575406 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc 
kubenswrapper[4828]: I1129 07:02:01.615730 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.637164 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6
a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.651762 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.654541 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.654581 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.654592 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.654609 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.654621 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:01Z","lastTransitionTime":"2025-11-29T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.666102 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d2086
9e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.678860 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.692664 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.703565 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.730773 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:44Z\\\",\\\"message\\\":\\\"GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI1129 07:01:44.288983 6246 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:01:44.290288 6246 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1129 07:01:44.290442 6246 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1129 07:01:44.290477 6246 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:01:44.290658 6246 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:01:44.290752 6246 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044
f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.756938 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.757128 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.757255 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.757389 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.757474 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:01Z","lastTransitionTime":"2025-11-29T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.860898 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.860939 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.860950 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.860966 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.860978 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:01Z","lastTransitionTime":"2025-11-29T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.963862 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.963938 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.963956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.964426 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:01 crc kubenswrapper[4828]: I1129 07:02:01.964587 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:01Z","lastTransitionTime":"2025-11-29T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.067430 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.067479 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.067520 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.067541 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.067553 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:02Z","lastTransitionTime":"2025-11-29T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.169204 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.169234 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.169241 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.169256 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.169278 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:02Z","lastTransitionTime":"2025-11-29T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.271797 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.272095 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.272105 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.272120 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.272130 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:02Z","lastTransitionTime":"2025-11-29T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.373943 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.373975 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.373984 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.373999 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.374012 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:02Z","lastTransitionTime":"2025-11-29T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.476992 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.477049 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.477061 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.477080 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.477093 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:02Z","lastTransitionTime":"2025-11-29T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.579282 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.579328 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.579338 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.579354 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.579366 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:02Z","lastTransitionTime":"2025-11-29T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.681966 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.682008 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.682017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.682032 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.682044 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:02Z","lastTransitionTime":"2025-11-29T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.747406 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/1.log" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.748833 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerStarted","Data":"9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57"} Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.750075 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.767253 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.800547 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.800585 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.800594 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 
07:02:02.800608 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.800619 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:02Z","lastTransitionTime":"2025-11-29T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.802560 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcab
b9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc
7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.814379 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc 
kubenswrapper[4828]: I1129 07:02:02.827299 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f10494
5aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 
07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.839924 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.852050 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.861169 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.872127 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.886068 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.902290 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.902524 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.902539 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.902546 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:02 crc 
kubenswrapper[4828]: I1129 07:02:02.902559 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.902567 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:02Z","lastTransitionTime":"2025-11-29T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.923845 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:44Z\\\",\\\"message\\\":\\\"GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI1129 07:01:44.288983 6246 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:01:44.290288 6246 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1129 07:01:44.290442 6246 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1129 07:01:44.290477 6246 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:01:44.290658 6246 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:01:44.290752 6246 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.947829 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.957086 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.969858 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.969907 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.969916 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.969932 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.969941 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:02Z","lastTransitionTime":"2025-11-29T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.977375 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc kubenswrapper[4828]: E1129 07:02:02.986068 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.989372 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.989411 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.989422 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.989439 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.989450 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:02Z","lastTransitionTime":"2025-11-29T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:02 crc kubenswrapper[4828]: I1129 07:02:02.995295 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87ab1-f8c3-4d1e-9bcf-b3e3bbcb34d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210a8b6f3a1cc8705eb905de8c8f2bf7a50c8863b8a4807626e1c35693129ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:03 crc kubenswrapper[4828]: E1129 07:02:03.002689 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.006644 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.006681 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.006692 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.006709 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.006720 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:03Z","lastTransitionTime":"2025-11-29T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.010541 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:03 crc kubenswrapper[4828]: E1129 07:02:03.019386 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.021608 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.022432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.022456 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.022463 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:03 crc 
kubenswrapper[4828]: I1129 07:02:03.022477 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.022488 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:03Z","lastTransitionTime":"2025-11-29T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:03 crc kubenswrapper[4828]: E1129 07:02:03.042498 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff54
7002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cd
fc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"nam
es\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.050591 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed
0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.051933 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.051968 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.051978 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.051993 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.052003 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:03Z","lastTransitionTime":"2025-11-29T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:03 crc kubenswrapper[4828]: E1129 07:02:03.070657 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:03 crc kubenswrapper[4828]: E1129 07:02:03.070776 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.072339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.072361 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.072369 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.072382 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.072391 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:03Z","lastTransitionTime":"2025-11-29T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.175398 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.175439 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.175450 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.175466 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.175478 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:03Z","lastTransitionTime":"2025-11-29T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.278437 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.278471 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.278480 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.278494 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.278503 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:03Z","lastTransitionTime":"2025-11-29T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.381316 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.381358 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.381370 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.381385 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.381396 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:03Z","lastTransitionTime":"2025-11-29T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.411052 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:03 crc kubenswrapper[4828]: E1129 07:02:03.411235 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.411260 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.411322 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.411399 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:03 crc kubenswrapper[4828]: E1129 07:02:03.411429 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:03 crc kubenswrapper[4828]: E1129 07:02:03.411571 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:03 crc kubenswrapper[4828]: E1129 07:02:03.411715 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.483161 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.483189 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.483197 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.483211 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.483239 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:03Z","lastTransitionTime":"2025-11-29T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.586151 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.586207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.586219 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.586238 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.586250 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:03Z","lastTransitionTime":"2025-11-29T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.688473 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.688533 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.688545 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.688563 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.688574 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:03Z","lastTransitionTime":"2025-11-29T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.790640 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.790684 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.790694 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.790709 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.790720 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:03Z","lastTransitionTime":"2025-11-29T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.893049 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.893101 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.893114 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.893132 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.893145 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:03Z","lastTransitionTime":"2025-11-29T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.995795 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.995837 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.995846 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.995863 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:03 crc kubenswrapper[4828]: I1129 07:02:03.995872 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:03Z","lastTransitionTime":"2025-11-29T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.099418 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.099477 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.099493 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.099514 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.099530 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:04Z","lastTransitionTime":"2025-11-29T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.202650 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.202721 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.202738 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.202762 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.202780 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:04Z","lastTransitionTime":"2025-11-29T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.306349 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.306406 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.306422 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.306443 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.306463 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:04Z","lastTransitionTime":"2025-11-29T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.408452 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.408541 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.408556 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.408572 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.408589 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:04Z","lastTransitionTime":"2025-11-29T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.511199 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.511293 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.511311 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.511331 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.511345 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:04Z","lastTransitionTime":"2025-11-29T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.614366 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.614397 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.614405 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.614418 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.614426 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:04Z","lastTransitionTime":"2025-11-29T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.716403 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.716455 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.716467 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.716482 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.716493 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:04Z","lastTransitionTime":"2025-11-29T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.819593 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.819660 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.819680 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.819706 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.819725 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:04Z","lastTransitionTime":"2025-11-29T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.921855 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.921893 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.921900 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.921914 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:04 crc kubenswrapper[4828]: I1129 07:02:04.921924 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:04Z","lastTransitionTime":"2025-11-29T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.024144 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.024198 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.024210 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.024227 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.024240 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:05Z","lastTransitionTime":"2025-11-29T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.126582 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.126626 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.126637 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.126652 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.126662 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:05Z","lastTransitionTime":"2025-11-29T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.230042 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.230120 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.230151 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.230170 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.230189 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:05Z","lastTransitionTime":"2025-11-29T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.333036 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.333080 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.333092 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.333110 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.333122 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:05Z","lastTransitionTime":"2025-11-29T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.411525 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.411568 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.411582 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.411608 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:05 crc kubenswrapper[4828]: E1129 07:02:05.411707 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:05 crc kubenswrapper[4828]: E1129 07:02:05.411816 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:05 crc kubenswrapper[4828]: E1129 07:02:05.411963 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:05 crc kubenswrapper[4828]: E1129 07:02:05.412042 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.435866 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.435921 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.435932 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.435950 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.435963 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:05Z","lastTransitionTime":"2025-11-29T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.539101 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.539149 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.539159 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.539180 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.539191 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:05Z","lastTransitionTime":"2025-11-29T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.642056 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.642091 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.642100 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.642116 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.642128 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:05Z","lastTransitionTime":"2025-11-29T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.745475 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.745512 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.745522 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.745538 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.745593 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:05Z","lastTransitionTime":"2025-11-29T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.760004 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/2.log" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.760659 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/1.log" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.764642 4828 generic.go:334] "Generic (PLEG): container finished" podID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerID="9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57" exitCode=1 Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.764707 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerDied","Data":"9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57"} Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.764789 4828 scope.go:117] "RemoveContainer" containerID="e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.765711 4828 scope.go:117] "RemoveContainer" containerID="9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57" Nov 29 07:02:05 crc kubenswrapper[4828]: E1129 07:02:05.765896 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.784045 4828 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.798728 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.810992 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.829170 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.844252 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87ab1-f8c3-4d1e-9bcf-b3e3bbcb34d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210a8b6f3a1cc8705eb905de8c8f2bf7a50c8863b8a4807626e1c35693129ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.848101 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.848146 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 
07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.848158 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.848176 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.848189 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:05Z","lastTransitionTime":"2025-11-29T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.858525 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.869095 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.880994 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.897335 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.910661 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc 
kubenswrapper[4828]: I1129 07:02:05.921635 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.934439 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6
a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.945301 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.951658 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.951704 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.951727 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.951752 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.951766 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:05Z","lastTransitionTime":"2025-11-29T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.957846 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d2086
9e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.968897 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.982660 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:05 crc kubenswrapper[4828]: I1129 07:02:05.994286 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.012059 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:44Z\\\",\\\"message\\\":\\\"GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI1129 07:01:44.288983 6246 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:01:44.290288 6246 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1129 07:01:44.290442 6246 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1129 07:01:44.290477 6246 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:01:44.290658 6246 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:01:44.290752 6246 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:05Z\\\",\\\"message\\\":\\\"nshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410543 6481 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410636 6481 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) 
from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:02:03.411011 6481 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 07:02:03.411039 6481 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:02:03.411053 6481 factory.go:656] Stopping watch factory\\\\nI1129 07:02:03.411069 6481 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:02:03.411078 6481 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:02:03.454137 6481 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:02:03.454168 6481 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:02:03.454215 6481 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:02:03.454234 6481 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:02:03.454328 6481 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\
",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 
07:02:06.054141 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.054197 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.054207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.054222 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.054232 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:06Z","lastTransitionTime":"2025-11-29T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.157338 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.157389 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.157400 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.157420 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.157436 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:06Z","lastTransitionTime":"2025-11-29T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.260082 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.260127 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.260138 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.260153 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.260164 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:06Z","lastTransitionTime":"2025-11-29T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.362745 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.362781 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.362791 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.362804 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.362814 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:06Z","lastTransitionTime":"2025-11-29T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.469021 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.469193 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.469209 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.469237 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.469261 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:06Z","lastTransitionTime":"2025-11-29T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.572725 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.572778 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.572787 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.572803 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.572816 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:06Z","lastTransitionTime":"2025-11-29T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.676161 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.676204 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.676216 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.676242 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.676255 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:06Z","lastTransitionTime":"2025-11-29T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.776260 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/2.log" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.777956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.777992 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.778020 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.778037 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.778047 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:06Z","lastTransitionTime":"2025-11-29T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.880673 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.880718 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.880730 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.880746 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.880758 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:06Z","lastTransitionTime":"2025-11-29T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.984310 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.984361 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.984375 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.984407 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:06 crc kubenswrapper[4828]: I1129 07:02:06.984420 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:06Z","lastTransitionTime":"2025-11-29T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.087386 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.087432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.087445 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.087460 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.087470 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:07Z","lastTransitionTime":"2025-11-29T07:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.190244 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.190297 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.190307 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.190320 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.190329 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:07Z","lastTransitionTime":"2025-11-29T07:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.293079 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.293125 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.293137 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.293157 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.293171 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:07Z","lastTransitionTime":"2025-11-29T07:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.396304 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.396370 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.396380 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.396400 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.396412 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:07Z","lastTransitionTime":"2025-11-29T07:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.411929 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.411929 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.412212 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.412308 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:07 crc kubenswrapper[4828]: E1129 07:02:07.412424 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:07 crc kubenswrapper[4828]: E1129 07:02:07.412484 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:07 crc kubenswrapper[4828]: E1129 07:02:07.412569 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:07 crc kubenswrapper[4828]: E1129 07:02:07.412732 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.498994 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.499047 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.499066 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.499088 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.499103 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:07Z","lastTransitionTime":"2025-11-29T07:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.601669 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.601722 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.601734 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.601755 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.601767 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:07Z","lastTransitionTime":"2025-11-29T07:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.704091 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.704127 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.704137 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.704150 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.704160 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:07Z","lastTransitionTime":"2025-11-29T07:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.806704 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.806744 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.806752 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.806766 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.806774 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:07Z","lastTransitionTime":"2025-11-29T07:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.909305 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.909576 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.909659 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.909820 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:07 crc kubenswrapper[4828]: I1129 07:02:07.909896 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:07Z","lastTransitionTime":"2025-11-29T07:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.013244 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.013352 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.013365 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.013382 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.013392 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:08Z","lastTransitionTime":"2025-11-29T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.116225 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.116297 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.116311 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.116325 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.116335 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:08Z","lastTransitionTime":"2025-11-29T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.218956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.219239 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.219322 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.219394 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.219471 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:08Z","lastTransitionTime":"2025-11-29T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.323407 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.323736 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.323837 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.323940 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.324044 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:08Z","lastTransitionTime":"2025-11-29T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.427015 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.427059 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.427070 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.427091 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.427103 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:08Z","lastTransitionTime":"2025-11-29T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.529559 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.529627 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.529641 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.529658 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.529668 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:08Z","lastTransitionTime":"2025-11-29T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.633188 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.633465 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.633565 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.633701 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.633803 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:08Z","lastTransitionTime":"2025-11-29T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.736247 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.736324 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.736339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.736365 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.736384 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:08Z","lastTransitionTime":"2025-11-29T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.838720 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.839017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.839106 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.839314 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.839404 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:08Z","lastTransitionTime":"2025-11-29T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.947567 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.947615 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.947668 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.947690 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:08 crc kubenswrapper[4828]: I1129 07:02:08.947700 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:08Z","lastTransitionTime":"2025-11-29T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.049853 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.049882 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.049891 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.049906 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.049917 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:09Z","lastTransitionTime":"2025-11-29T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.152432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.152484 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.152499 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.152519 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.152532 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:09Z","lastTransitionTime":"2025-11-29T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.254944 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.254977 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.254986 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.255000 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.255008 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:09Z","lastTransitionTime":"2025-11-29T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.357636 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.357733 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.357759 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.357794 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.357823 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:09Z","lastTransitionTime":"2025-11-29T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.411202 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:09 crc kubenswrapper[4828]: E1129 07:02:09.411408 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.411863 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:09 crc kubenswrapper[4828]: E1129 07:02:09.411944 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.412005 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:09 crc kubenswrapper[4828]: E1129 07:02:09.412058 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.412221 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:09 crc kubenswrapper[4828]: E1129 07:02:09.412340 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.461326 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.461382 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.461406 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.461425 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.461439 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:09Z","lastTransitionTime":"2025-11-29T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.564552 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.564582 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.564591 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.564607 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.564620 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:09Z","lastTransitionTime":"2025-11-29T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.667396 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.667426 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.667436 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.667450 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.667460 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:09Z","lastTransitionTime":"2025-11-29T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.771118 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.771175 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.771188 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.771207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.771220 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:09Z","lastTransitionTime":"2025-11-29T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.875028 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.875088 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.875101 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.875120 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.875134 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:09Z","lastTransitionTime":"2025-11-29T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.977421 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.977478 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.977489 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.977506 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:09 crc kubenswrapper[4828]: I1129 07:02:09.977520 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:09Z","lastTransitionTime":"2025-11-29T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.080244 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.080305 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.080334 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.080351 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.080362 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:10Z","lastTransitionTime":"2025-11-29T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.182995 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.183055 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.183067 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.183083 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.183092 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:10Z","lastTransitionTime":"2025-11-29T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.285242 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.285300 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.285309 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.285324 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.285336 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:10Z","lastTransitionTime":"2025-11-29T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.387908 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.387963 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.387975 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.387994 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.388006 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:10Z","lastTransitionTime":"2025-11-29T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.490436 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.490478 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.490486 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.490501 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.490510 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:10Z","lastTransitionTime":"2025-11-29T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.592460 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.592493 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.592501 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.592518 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.592527 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:10Z","lastTransitionTime":"2025-11-29T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.695105 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.695156 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.695167 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.695184 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.695196 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:10Z","lastTransitionTime":"2025-11-29T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.797232 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.797288 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.797303 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.797326 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.797338 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:10Z","lastTransitionTime":"2025-11-29T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.899525 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.899565 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.899573 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.899589 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:10 crc kubenswrapper[4828]: I1129 07:02:10.899599 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:10Z","lastTransitionTime":"2025-11-29T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.002146 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.002190 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.002235 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.002251 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.002260 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:11Z","lastTransitionTime":"2025-11-29T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.104577 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.104641 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.104654 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.104671 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.104680 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:11Z","lastTransitionTime":"2025-11-29T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.206788 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.206830 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.206842 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.206858 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.206870 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:11Z","lastTransitionTime":"2025-11-29T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.309681 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.309734 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.309750 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.309778 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.309795 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:11Z","lastTransitionTime":"2025-11-29T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.410911 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.411285 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.411301 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:11 crc kubenswrapper[4828]: E1129 07:02:11.411434 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.411499 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:11 crc kubenswrapper[4828]: E1129 07:02:11.411578 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:11 crc kubenswrapper[4828]: E1129 07:02:11.411351 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:11 crc kubenswrapper[4828]: E1129 07:02:11.411697 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.412752 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.412808 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.412825 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.412846 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.412863 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:11Z","lastTransitionTime":"2025-11-29T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.439645 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.456677 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.480985 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:44Z\\\",\\\"message\\\":\\\"GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI1129 07:01:44.288983 6246 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:01:44.290288 6246 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1129 07:01:44.290442 6246 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1129 07:01:44.290477 6246 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:01:44.290658 6246 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:01:44.290752 6246 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:05Z\\\",\\\"message\\\":\\\"nshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410543 6481 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410636 6481 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) 
from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:02:03.411011 6481 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 07:02:03.411039 6481 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:02:03.411053 6481 factory.go:656] Stopping watch factory\\\\nI1129 07:02:03.411069 6481 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:02:03.411078 6481 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:02:03.454137 6481 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:02:03.454168 6481 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:02:03.454215 6481 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:02:03.454234 6481 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:02:03.454328 6481 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\
",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 
07:02:11.496528 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.511412 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.515535 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.515582 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.515594 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.515610 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.515621 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:11Z","lastTransitionTime":"2025-11-29T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.525971 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z 
is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.540848 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.564180 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426eb
f304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.579776 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87ab1-f8c3-4d1e-9bcf-b3e3bbcb34d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210a8b6f3a1cc8705eb905de8c8f2bf7a50c8863b8a4807626e1c35693129ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.596959 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.610866 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc 
kubenswrapper[4828]: I1129 07:02:11.618204 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.618243 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.618256 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.618285 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.618299 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:11Z","lastTransitionTime":"2025-11-29T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.625815 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.640660 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.650821 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.662247 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b59
48129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.677777 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6
a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.695644 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.706998 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.721716 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.721772 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.721785 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.721804 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.721816 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:11Z","lastTransitionTime":"2025-11-29T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.824077 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.824133 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.824147 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.824167 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.824179 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:11Z","lastTransitionTime":"2025-11-29T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.926589 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.926641 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.926654 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.926674 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:11 crc kubenswrapper[4828]: I1129 07:02:11.926686 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:11Z","lastTransitionTime":"2025-11-29T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.028691 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.028730 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.028739 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.028755 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.028765 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:12Z","lastTransitionTime":"2025-11-29T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.131061 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.131106 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.131116 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.131131 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.131143 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:12Z","lastTransitionTime":"2025-11-29T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.232951 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.232998 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.233039 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.233054 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.233065 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:12Z","lastTransitionTime":"2025-11-29T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.335557 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.335605 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.335619 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.335638 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.335651 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:12Z","lastTransitionTime":"2025-11-29T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.438459 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.438520 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.438537 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.438563 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.438603 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:12Z","lastTransitionTime":"2025-11-29T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.541392 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.541433 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.541445 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.541462 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.541473 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:12Z","lastTransitionTime":"2025-11-29T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.649704 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.649831 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.649852 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.649880 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.649911 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:12Z","lastTransitionTime":"2025-11-29T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.754172 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.754295 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.754490 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.754524 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.754538 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:12Z","lastTransitionTime":"2025-11-29T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.857405 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.857438 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.857447 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.857475 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.857487 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:12Z","lastTransitionTime":"2025-11-29T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.960633 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.960675 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.960685 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.960700 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:12 crc kubenswrapper[4828]: I1129 07:02:12.960712 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:12Z","lastTransitionTime":"2025-11-29T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.062975 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.063031 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.063048 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.063063 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.063074 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:13Z","lastTransitionTime":"2025-11-29T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.165823 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.165856 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.165863 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.165877 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.165886 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:13Z","lastTransitionTime":"2025-11-29T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.203504 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.203781 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.203902 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.204005 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.204094 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:13Z","lastTransitionTime":"2025-11-29T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:13 crc kubenswrapper[4828]: E1129 07:02:13.216324 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:13Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.220630 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.220664 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.220672 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.220686 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.220696 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:13Z","lastTransitionTime":"2025-11-29T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:13 crc kubenswrapper[4828]: E1129 07:02:13.231668 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:13Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.235179 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.235207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.235217 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.235233 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.235244 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:13Z","lastTransitionTime":"2025-11-29T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:13Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:13 crc kubenswrapper[4828]: E1129 07:02:13.281589 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.283278 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.283426 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.283494 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.283558 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.283633 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:13Z","lastTransitionTime":"2025-11-29T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.386476 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.386523 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.386535 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.386550 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.386559 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:13Z","lastTransitionTime":"2025-11-29T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.411678 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.411773 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:13 crc kubenswrapper[4828]: E1129 07:02:13.411959 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:13 crc kubenswrapper[4828]: E1129 07:02:13.412038 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.412344 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.412406 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:13 crc kubenswrapper[4828]: E1129 07:02:13.412428 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:13 crc kubenswrapper[4828]: E1129 07:02:13.412563 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.488880 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.488933 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.488945 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.488962 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.488974 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:13Z","lastTransitionTime":"2025-11-29T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.591139 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.591174 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.591184 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.591200 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.591211 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:13Z","lastTransitionTime":"2025-11-29T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.693863 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.694224 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.694360 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.694454 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.694554 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:13Z","lastTransitionTime":"2025-11-29T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.797630 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.797672 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.797683 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.797699 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.797710 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:13Z","lastTransitionTime":"2025-11-29T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.900400 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.900437 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.900447 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.900463 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:13 crc kubenswrapper[4828]: I1129 07:02:13.900473 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:13Z","lastTransitionTime":"2025-11-29T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.002910 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.003250 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.003354 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.003461 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.003541 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:14Z","lastTransitionTime":"2025-11-29T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.106249 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.106311 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.106320 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.106335 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.106345 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:14Z","lastTransitionTime":"2025-11-29T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.208747 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.208789 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.208798 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.208814 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.208824 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:14Z","lastTransitionTime":"2025-11-29T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.220253 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs\") pod \"network-metrics-daemon-4ffn6\" (UID: \"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\") " pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:14 crc kubenswrapper[4828]: E1129 07:02:14.220490 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:02:14 crc kubenswrapper[4828]: E1129 07:02:14.220587 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs podName:f6581e2a-a98c-493d-8c8f-20c5b4c4b17c nodeName:}" failed. No retries permitted until 2025-11-29 07:02:46.220554252 +0000 UTC m=+105.842630310 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs") pod "network-metrics-daemon-4ffn6" (UID: "f6581e2a-a98c-493d-8c8f-20c5b4c4b17c") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.311529 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.311584 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.311598 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.311616 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.311628 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:14Z","lastTransitionTime":"2025-11-29T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.415247 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.415348 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.415365 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.415384 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.415404 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:14Z","lastTransitionTime":"2025-11-29T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.517916 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.517965 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.517975 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.517990 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.518001 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:14Z","lastTransitionTime":"2025-11-29T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.620360 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.620394 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.620402 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.620415 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.620443 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:14Z","lastTransitionTime":"2025-11-29T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.723660 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.723749 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.723771 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.723838 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.723863 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:14Z","lastTransitionTime":"2025-11-29T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.826466 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.826538 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.826550 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.826577 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.826593 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:14Z","lastTransitionTime":"2025-11-29T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.929111 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.929228 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.929244 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.929286 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:14 crc kubenswrapper[4828]: I1129 07:02:14.929299 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:14Z","lastTransitionTime":"2025-11-29T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.032498 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.032546 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.032555 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.032574 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.032587 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:15Z","lastTransitionTime":"2025-11-29T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.135251 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.135343 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.135363 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.135387 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.135404 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:15Z","lastTransitionTime":"2025-11-29T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.238463 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.238531 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.238547 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.238566 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.238583 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:15Z","lastTransitionTime":"2025-11-29T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.341809 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.342819 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.342843 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.342859 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.342871 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:15Z","lastTransitionTime":"2025-11-29T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.410947 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.410973 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.411016 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.411028 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:15 crc kubenswrapper[4828]: E1129 07:02:15.411088 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:15 crc kubenswrapper[4828]: E1129 07:02:15.411233 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:15 crc kubenswrapper[4828]: E1129 07:02:15.411260 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:15 crc kubenswrapper[4828]: E1129 07:02:15.411344 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.445058 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.445096 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.445107 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.445124 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.445137 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:15Z","lastTransitionTime":"2025-11-29T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.548211 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.548292 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.548303 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.548318 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.548328 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:15Z","lastTransitionTime":"2025-11-29T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.651859 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.651964 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.651977 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.652055 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.652103 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:15Z","lastTransitionTime":"2025-11-29T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.754641 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.754674 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.754684 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.754700 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.754712 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:15Z","lastTransitionTime":"2025-11-29T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.858132 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.858213 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.858246 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.858288 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.858302 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:15Z","lastTransitionTime":"2025-11-29T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.960591 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.960708 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.960725 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.960752 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:15 crc kubenswrapper[4828]: I1129 07:02:15.960800 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:15Z","lastTransitionTime":"2025-11-29T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.064806 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.064864 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.064880 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.064904 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.064920 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:16Z","lastTransitionTime":"2025-11-29T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.167409 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.167476 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.167487 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.167501 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.167510 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:16Z","lastTransitionTime":"2025-11-29T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.270701 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.270747 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.270760 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.270777 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.270787 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:16Z","lastTransitionTime":"2025-11-29T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.373536 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.373591 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.373603 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.373621 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.373634 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:16Z","lastTransitionTime":"2025-11-29T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.475517 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.475554 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.475564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.475579 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.475589 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:16Z","lastTransitionTime":"2025-11-29T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.577707 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.577750 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.577759 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.577773 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.577784 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:16Z","lastTransitionTime":"2025-11-29T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.680404 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.680452 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.680461 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.680477 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.680486 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:16Z","lastTransitionTime":"2025-11-29T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.782157 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.782197 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.782206 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.782223 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.782234 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:16Z","lastTransitionTime":"2025-11-29T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.812770 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qfj9g_b3a37050-181c-42b4-acf9-dc458a0f5bcf/kube-multus/0.log" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.812844 4828 generic.go:334] "Generic (PLEG): container finished" podID="b3a37050-181c-42b4-acf9-dc458a0f5bcf" containerID="77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8" exitCode=1 Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.812888 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qfj9g" event={"ID":"b3a37050-181c-42b4-acf9-dc458a0f5bcf","Type":"ContainerDied","Data":"77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8"} Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.813489 4828 scope.go:117] "RemoveContainer" containerID="77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.829452 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:16Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.840975 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:16Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.860449 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:44Z\\\",\\\"message\\\":\\\"GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI1129 07:01:44.288983 6246 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:01:44.290288 6246 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1129 07:01:44.290442 6246 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1129 07:01:44.290477 6246 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:01:44.290658 6246 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:01:44.290752 6246 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:05Z\\\",\\\"message\\\":\\\"nshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410543 6481 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410636 6481 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) 
from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:02:03.411011 6481 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 07:02:03.411039 6481 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:02:03.411053 6481 factory.go:656] Stopping watch factory\\\\nI1129 07:02:03.411069 6481 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:02:03.411078 6481 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:02:03.454137 6481 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:02:03.454168 6481 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:02:03.454215 6481 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:02:03.454234 6481 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:02:03.454328 6481 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\
",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:16Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 
07:02:16.872123 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:16Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.887192 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.887236 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.887249 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.887282 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.887295 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:16Z","lastTransitionTime":"2025-11-29T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.887918 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:16Z\\\",\\\"message\\\":\\\"2025-11-29T07:01:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad\\\\n2025-11-29T07:01:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad to /host/opt/cni/bin/\\\\n2025-11-29T07:01:31Z [verbose] multus-daemon started\\\\n2025-11-29T07:01:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:02:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:16Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.901362 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:16Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.921954 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:16Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.935644 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87ab1-f8c3-4d1e-9bcf-b3e3bbcb34d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210a8b6f3a1cc8705eb905de8c8f2bf7a50c8863b8a4807626e1c35693129ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:16Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.951219 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:16Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.969125 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:16Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.989164 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.989195 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.989204 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.989217 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.989225 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:16Z","lastTransitionTime":"2025-11-29T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:16 crc kubenswrapper[4828]: I1129 07:02:16.989573 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:16Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.002991 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:17Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.013815 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:17Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:17 crc 
kubenswrapper[4828]: I1129 07:02:17.024100 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:17Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.039803 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6
a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:17Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.051327 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:17Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.065842 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:17Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.077719 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:17Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.091855 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.091885 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.091895 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.091910 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.091922 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:17Z","lastTransitionTime":"2025-11-29T07:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.193810 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.193858 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.193869 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.193897 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.193910 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:17Z","lastTransitionTime":"2025-11-29T07:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.297811 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.297856 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.297866 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.297886 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.297898 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:17Z","lastTransitionTime":"2025-11-29T07:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.401106 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.401205 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.401229 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.401262 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.401340 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:17Z","lastTransitionTime":"2025-11-29T07:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.411386 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.411435 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.411430 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.411397 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:17 crc kubenswrapper[4828]: E1129 07:02:17.411643 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:17 crc kubenswrapper[4828]: E1129 07:02:17.411851 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:17 crc kubenswrapper[4828]: E1129 07:02:17.412007 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:17 crc kubenswrapper[4828]: E1129 07:02:17.412136 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.504721 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.504769 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.504778 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.504793 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.504803 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:17Z","lastTransitionTime":"2025-11-29T07:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.609192 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.609339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.609362 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.609395 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.609418 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:17Z","lastTransitionTime":"2025-11-29T07:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.712405 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.712467 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.712478 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.712495 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.712509 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:17Z","lastTransitionTime":"2025-11-29T07:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.814571 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.814605 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.814614 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.814627 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.814635 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:17Z","lastTransitionTime":"2025-11-29T07:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.917355 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.917421 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.917434 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.917476 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:17 crc kubenswrapper[4828]: I1129 07:02:17.917490 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:17Z","lastTransitionTime":"2025-11-29T07:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.020681 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.021127 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.021562 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.021662 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.021737 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:18Z","lastTransitionTime":"2025-11-29T07:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.124108 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.124147 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.124160 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.124176 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.124188 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:18Z","lastTransitionTime":"2025-11-29T07:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.227885 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.227948 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.227966 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.227994 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.228017 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:18Z","lastTransitionTime":"2025-11-29T07:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.331972 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.332041 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.332062 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.332091 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.332115 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:18Z","lastTransitionTime":"2025-11-29T07:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.436020 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.436080 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.436089 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.436105 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.436115 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:18Z","lastTransitionTime":"2025-11-29T07:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.538198 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.538245 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.538260 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.538305 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.538321 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:18Z","lastTransitionTime":"2025-11-29T07:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.641944 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.641995 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.642007 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.642023 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.642034 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:18Z","lastTransitionTime":"2025-11-29T07:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.745441 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.745564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.745588 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.745625 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.745648 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:18Z","lastTransitionTime":"2025-11-29T07:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.823029 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qfj9g_b3a37050-181c-42b4-acf9-dc458a0f5bcf/kube-multus/0.log" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.823101 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qfj9g" event={"ID":"b3a37050-181c-42b4-acf9-dc458a0f5bcf","Type":"ContainerStarted","Data":"81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20"} Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.837726 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.848223 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.848260 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.848299 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.848316 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.848327 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:18Z","lastTransitionTime":"2025-11-29T07:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.852749 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-29T07:02:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.885448 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e99710295698c6b6771a4d51f24037cfe943836db3be5e3a39e8d64ce25e3345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:01:44Z\\\",\\\"message\\\":\\\"GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI1129 07:01:44.288983 6246 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:01:44.290288 6246 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1129 07:01:44.290442 6246 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1129 07:01:44.290477 6246 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:01:44.290658 6246 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:01:44.290752 6246 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:05Z\\\",\\\"message\\\":\\\"nshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410543 6481 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410636 6481 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) 
from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:02:03.411011 6481 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 07:02:03.411039 6481 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:02:03.411053 6481 factory.go:656] Stopping watch factory\\\\nI1129 07:02:03.411069 6481 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:02:03.411078 6481 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:02:03.454137 6481 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:02:03.454168 6481 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:02:03.454215 6481 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:02:03.454234 6481 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:02:03.454328 6481 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\
",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 
07:02:18.901147 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:16Z\\\",\\\"message\\\":\\\"2025-11-29T07:01:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad\\\\n2025-11-29T07:01:30+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad to /host/opt/cni/bin/\\\\n2025-11-29T07:01:31Z [verbose] multus-daemon started\\\\n2025-11-29T07:01:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:02:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.915146 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.938972 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07
:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.950618 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.950670 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.950681 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.950698 4828 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.950710 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:18Z","lastTransitionTime":"2025-11-29T07:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.951936 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87ab1-f8c3-4d1e-9bcf-b3e3bbcb34d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210a8b6f3a1cc8705eb905de8c8f2bf7a50c8863b8a4807626e1c35693129ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:18 crc kubenswrapper[4828]: I1129 07:02:18.970966 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.014894 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.033074 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.044081 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.052969 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.053002 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.053011 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.053026 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.053038 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:19Z","lastTransitionTime":"2025-11-29T07:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.057212 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.066619 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc 
kubenswrapper[4828]: I1129 07:02:19.081599 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f10494
5aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 
07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.094107 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.107509 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.118330 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.130102 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.154973 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.155017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.155026 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.155040 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.155058 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:19Z","lastTransitionTime":"2025-11-29T07:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.257674 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.257733 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.257749 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.257859 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.257879 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:19Z","lastTransitionTime":"2025-11-29T07:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.361413 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.361462 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.361471 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.361487 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.361496 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:19Z","lastTransitionTime":"2025-11-29T07:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.411323 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.411514 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.411552 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.411771 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.411848 4828 scope.go:117] "RemoveContainer" containerID="9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57" Nov 29 07:02:19 crc kubenswrapper[4828]: E1129 07:02:19.411935 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:19 crc kubenswrapper[4828]: E1129 07:02:19.411841 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:19 crc kubenswrapper[4828]: E1129 07:02:19.412033 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:19 crc kubenswrapper[4828]: E1129 07:02:19.412045 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" Nov 29 07:02:19 crc kubenswrapper[4828]: E1129 07:02:19.412098 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.432106 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.451132 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.463971 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.464009 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.464020 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:19 crc 
kubenswrapper[4828]: I1129 07:02:19.464035 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.464047 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:19Z","lastTransitionTime":"2025-11-29T07:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.475061 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:05Z\\\",\\\"message\\\":\\\"nshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410543 6481 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410636 6481 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:02:03.411011 6481 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 07:02:03.411039 6481 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:02:03.411053 6481 factory.go:656] Stopping watch factory\\\\nI1129 07:02:03.411069 6481 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:02:03.411078 6481 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:02:03.454137 6481 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:02:03.454168 6481 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:02:03.454215 6481 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:02:03.454234 6481 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:02:03.454328 6481 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:02:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044
f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.493938 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:16Z\\\",\\\"message\\\":\\\"2025-11-29T07:01:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad\\\\n2025-11-29T07:01:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad to /host/opt/cni/bin/\\\\n2025-11-29T07:01:31Z [verbose] multus-daemon started\\\\n2025-11-29T07:01:31Z [verbose] 
Readiness Indicator file check\\\\n2025-11-29T07:02:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.506599 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944
cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.525938 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.539152 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87ab1-f8c3-4d1e-9bcf-b3e3bbcb34d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210a8b6f3a1cc8705eb905de8c8f2bf7a50c8863b8a4807626e1c35693129ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.550994 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.562493 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.566035 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.566071 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.566080 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.566094 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.566105 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:19Z","lastTransitionTime":"2025-11-29T07:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.573700 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.586987 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.601697 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.612643 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc 
kubenswrapper[4828]: I1129 07:02:19.627818 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f10494
5aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 
07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.640173 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.651943 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.661808 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.668790 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.668839 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.668851 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.668869 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.668882 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:19Z","lastTransitionTime":"2025-11-29T07:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.672615 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.771573 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:19 crc 
kubenswrapper[4828]: I1129 07:02:19.771617 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.771625 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.771642 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.771654 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:19Z","lastTransitionTime":"2025-11-29T07:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.874620 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.874686 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.874698 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.874719 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.874732 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:19Z","lastTransitionTime":"2025-11-29T07:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.977086 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.977129 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.977140 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.977155 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:19 crc kubenswrapper[4828]: I1129 07:02:19.977168 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:19Z","lastTransitionTime":"2025-11-29T07:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.079350 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.079392 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.079401 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.079415 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.079424 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:20Z","lastTransitionTime":"2025-11-29T07:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.182378 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.182419 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.182428 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.182442 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.182452 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:20Z","lastTransitionTime":"2025-11-29T07:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.284459 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.284496 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.284507 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.284523 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.284535 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:20Z","lastTransitionTime":"2025-11-29T07:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.387071 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.387136 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.387155 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.387182 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.387204 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:20Z","lastTransitionTime":"2025-11-29T07:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.489991 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.490049 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.490070 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.490092 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.490103 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:20Z","lastTransitionTime":"2025-11-29T07:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.592769 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.592830 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.592850 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.592874 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.592891 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:20Z","lastTransitionTime":"2025-11-29T07:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.695569 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.695622 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.695638 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.695657 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.695673 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:20Z","lastTransitionTime":"2025-11-29T07:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.798081 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.798133 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.798145 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.798165 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.798177 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:20Z","lastTransitionTime":"2025-11-29T07:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.900729 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.900760 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.900768 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.900781 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:20 crc kubenswrapper[4828]: I1129 07:02:20.900790 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:20Z","lastTransitionTime":"2025-11-29T07:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.003735 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.003798 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.003814 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.003834 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.003847 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:21Z","lastTransitionTime":"2025-11-29T07:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.106223 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.106258 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.106281 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.106294 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.106304 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:21Z","lastTransitionTime":"2025-11-29T07:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.209048 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.209082 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.209090 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.209103 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.209113 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:21Z","lastTransitionTime":"2025-11-29T07:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.311372 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.311418 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.311432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.311450 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.311463 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:21Z","lastTransitionTime":"2025-11-29T07:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.411531 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.411571 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.411730 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:21 crc kubenswrapper[4828]: E1129 07:02:21.412130 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.412325 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:21 crc kubenswrapper[4828]: E1129 07:02:21.412531 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:21 crc kubenswrapper[4828]: E1129 07:02:21.412704 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:21 crc kubenswrapper[4828]: E1129 07:02:21.412926 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.414017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.414062 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.414076 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.414539 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.414559 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:21Z","lastTransitionTime":"2025-11-29T07:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.430317 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.447231 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.483067 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:05Z\\\",\\\"message\\\":\\\"nshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410543 6481 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410636 6481 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:02:03.411011 6481 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 07:02:03.411039 6481 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:02:03.411053 6481 factory.go:656] Stopping watch factory\\\\nI1129 07:02:03.411069 6481 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:02:03.411078 6481 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:02:03.454137 6481 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:02:03.454168 6481 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:02:03.454215 6481 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:02:03.454234 6481 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:02:03.454328 6481 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:02:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044
f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.496309 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.511549 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:16Z\\\",\\\"message\\\":\\\"2025-11-29T07:01:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad\\\\n2025-11-29T07:01:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad to /host/opt/cni/bin/\\\\n2025-11-29T07:01:31Z [verbose] multus-daemon started\\\\n2025-11-29T07:01:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:02:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.516745 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.516912 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.517007 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.517082 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.517145 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:21Z","lastTransitionTime":"2025-11-29T07:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.525433 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.544068 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.555325 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87ab1-f8c3-4d1e-9bcf-b3e3bbcb34d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210a8b6f3a1cc8705eb905de8c8f2bf7a50c8863b8a4807626e1c35693129ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.569994 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.584440 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.599519 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.614144 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.619516 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.619540 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.619548 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.619574 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.619584 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:21Z","lastTransitionTime":"2025-11-29T07:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.625032 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc 
kubenswrapper[4828]: I1129 07:02:21.637747 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.652867 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6
a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.664643 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.677469 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.688530 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.722244 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.722311 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.722324 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.722345 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.722358 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:21Z","lastTransitionTime":"2025-11-29T07:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.825463 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.825519 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.825531 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.825549 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.825565 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:21Z","lastTransitionTime":"2025-11-29T07:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.928410 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.928461 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.928497 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.928534 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:21 crc kubenswrapper[4828]: I1129 07:02:21.928569 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:21Z","lastTransitionTime":"2025-11-29T07:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.031199 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.031293 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.031307 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.031324 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.031340 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:22Z","lastTransitionTime":"2025-11-29T07:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.133504 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.133544 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.133554 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.133574 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.133610 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:22Z","lastTransitionTime":"2025-11-29T07:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.236023 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.236068 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.236079 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.236094 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.236105 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:22Z","lastTransitionTime":"2025-11-29T07:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.338377 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.338415 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.338424 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.338439 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.338450 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:22Z","lastTransitionTime":"2025-11-29T07:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.441147 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.441195 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.441209 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.441227 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.441239 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:22Z","lastTransitionTime":"2025-11-29T07:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.543697 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.543736 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.543748 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.543764 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.543776 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:22Z","lastTransitionTime":"2025-11-29T07:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.646722 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.646751 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.646761 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.646774 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.646783 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:22Z","lastTransitionTime":"2025-11-29T07:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.749630 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.749687 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.749703 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.749722 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.749734 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:22Z","lastTransitionTime":"2025-11-29T07:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.851572 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.851607 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.851614 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.851628 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.851637 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:22Z","lastTransitionTime":"2025-11-29T07:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.954326 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.954384 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.954396 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.954412 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:22 crc kubenswrapper[4828]: I1129 07:02:22.954424 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:22Z","lastTransitionTime":"2025-11-29T07:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.058191 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.058238 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.058248 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.058300 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.058310 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:23Z","lastTransitionTime":"2025-11-29T07:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.160920 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.160990 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.161012 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.161039 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.161060 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:23Z","lastTransitionTime":"2025-11-29T07:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.263343 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.263390 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.263404 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.263424 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.263438 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:23Z","lastTransitionTime":"2025-11-29T07:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.365246 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.365302 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.365311 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.365328 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.365338 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:23Z","lastTransitionTime":"2025-11-29T07:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.411089 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.411207 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:23 crc kubenswrapper[4828]: E1129 07:02:23.411368 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.411502 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.411494 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:23 crc kubenswrapper[4828]: E1129 07:02:23.411882 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:23 crc kubenswrapper[4828]: E1129 07:02:23.411696 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:23 crc kubenswrapper[4828]: E1129 07:02:23.412003 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.467653 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.467686 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.467694 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.467710 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.467719 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:23Z","lastTransitionTime":"2025-11-29T07:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.570558 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.570640 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.570684 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.570713 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.570728 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:23Z","lastTransitionTime":"2025-11-29T07:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.644574 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.644623 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.644642 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.644662 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.644676 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:23Z","lastTransitionTime":"2025-11-29T07:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:23 crc kubenswrapper[4828]: E1129 07:02:23.663632 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:23Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.667572 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.667612 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.667633 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.667651 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.667667 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:23Z","lastTransitionTime":"2025-11-29T07:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:23 crc kubenswrapper[4828]: E1129 07:02:23.678932 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:23Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.682353 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.682396 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.682407 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.682423 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.682437 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:23Z","lastTransitionTime":"2025-11-29T07:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:23 crc kubenswrapper[4828]: E1129 07:02:23.698734 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:23Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.701984 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.702024 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.702037 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.702055 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.702068 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:23Z","lastTransitionTime":"2025-11-29T07:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:23 crc kubenswrapper[4828]: E1129 07:02:23.713930 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:23Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.718073 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.718116 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.718127 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.718142 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.718152 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:23Z","lastTransitionTime":"2025-11-29T07:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:23 crc kubenswrapper[4828]: E1129 07:02:23.729973 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:23Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:23 crc kubenswrapper[4828]: E1129 07:02:23.730083 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.731341 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.731372 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.731380 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.731391 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.731401 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:23Z","lastTransitionTime":"2025-11-29T07:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.834761 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.834837 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.834854 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.834876 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.834897 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:23Z","lastTransitionTime":"2025-11-29T07:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.936995 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.937047 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.937063 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.937081 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:23 crc kubenswrapper[4828]: I1129 07:02:23.937095 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:23Z","lastTransitionTime":"2025-11-29T07:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.040540 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.040590 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.040602 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.040620 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.040632 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:24Z","lastTransitionTime":"2025-11-29T07:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.143564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.143660 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.143670 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.143687 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.143697 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:24Z","lastTransitionTime":"2025-11-29T07:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.246831 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.246858 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.246866 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.246880 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.246890 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:24Z","lastTransitionTime":"2025-11-29T07:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.349650 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.349703 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.349719 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.349742 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.349759 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:24Z","lastTransitionTime":"2025-11-29T07:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.452943 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.453001 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.453017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.453035 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.453046 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:24Z","lastTransitionTime":"2025-11-29T07:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.555596 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.555638 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.555647 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.555663 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.555675 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:24Z","lastTransitionTime":"2025-11-29T07:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.659329 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.659371 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.659385 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.659401 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.659413 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:24Z","lastTransitionTime":"2025-11-29T07:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.762517 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.762566 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.762578 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.762594 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.762609 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:24Z","lastTransitionTime":"2025-11-29T07:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.865941 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.865994 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.866006 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.866025 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.866037 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:24Z","lastTransitionTime":"2025-11-29T07:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.968102 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.968171 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.968185 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.968203 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:24 crc kubenswrapper[4828]: I1129 07:02:24.968216 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:24Z","lastTransitionTime":"2025-11-29T07:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.071025 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.071065 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.071075 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.071093 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.071102 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:25Z","lastTransitionTime":"2025-11-29T07:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.173996 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.174045 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.174060 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.174080 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.174096 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:25Z","lastTransitionTime":"2025-11-29T07:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.276772 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.276827 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.276839 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.276859 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.276872 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:25Z","lastTransitionTime":"2025-11-29T07:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.379840 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.379885 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.379894 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.379911 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.379922 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:25Z","lastTransitionTime":"2025-11-29T07:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.411795 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.411831 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.411795 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:25 crc kubenswrapper[4828]: E1129 07:02:25.411941 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.411990 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:25 crc kubenswrapper[4828]: E1129 07:02:25.412068 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:25 crc kubenswrapper[4828]: E1129 07:02:25.412003 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:25 crc kubenswrapper[4828]: E1129 07:02:25.412200 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.483586 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.483625 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.483633 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.483646 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.483656 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:25Z","lastTransitionTime":"2025-11-29T07:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.585989 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.586025 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.586034 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.586048 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.586057 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:25Z","lastTransitionTime":"2025-11-29T07:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.688889 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.688935 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.688944 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.688961 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.688971 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:25Z","lastTransitionTime":"2025-11-29T07:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.791422 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.791461 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.791470 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.791491 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.791501 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:25Z","lastTransitionTime":"2025-11-29T07:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.894613 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.894698 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.894713 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.894731 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.894743 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:25Z","lastTransitionTime":"2025-11-29T07:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.999662 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.999692 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.999701 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.999714 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:25 crc kubenswrapper[4828]: I1129 07:02:25.999723 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:25Z","lastTransitionTime":"2025-11-29T07:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.103452 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.103501 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.103512 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.103528 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.103545 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:26Z","lastTransitionTime":"2025-11-29T07:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.206005 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.206060 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.206072 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.206090 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.206102 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:26Z","lastTransitionTime":"2025-11-29T07:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.309591 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.309625 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.309635 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.309649 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.309659 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:26Z","lastTransitionTime":"2025-11-29T07:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.412848 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.412902 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.412915 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.412933 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.412945 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:26Z","lastTransitionTime":"2025-11-29T07:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.514859 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.514903 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.514913 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.514928 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.514940 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:26Z","lastTransitionTime":"2025-11-29T07:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.617162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.617206 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.617216 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.617233 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.617245 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:26Z","lastTransitionTime":"2025-11-29T07:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.720066 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.720125 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.720140 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.720159 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.720169 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:26Z","lastTransitionTime":"2025-11-29T07:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.823054 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.823095 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.823108 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.823125 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.823135 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:26Z","lastTransitionTime":"2025-11-29T07:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.925547 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.925587 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.925596 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.925608 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:26 crc kubenswrapper[4828]: I1129 07:02:26.925618 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:26Z","lastTransitionTime":"2025-11-29T07:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.030564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.030645 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.030656 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.030674 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.030685 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:27Z","lastTransitionTime":"2025-11-29T07:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.133530 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.133562 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.133572 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.133586 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.133599 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:27Z","lastTransitionTime":"2025-11-29T07:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.245673 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.245713 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.245722 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.245737 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.245750 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:27Z","lastTransitionTime":"2025-11-29T07:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.347781 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.347807 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.347814 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.347828 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.347837 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:27Z","lastTransitionTime":"2025-11-29T07:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.411134 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:27 crc kubenswrapper[4828]: E1129 07:02:27.411312 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.411134 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.411155 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:27 crc kubenswrapper[4828]: E1129 07:02:27.411406 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.411325 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:27 crc kubenswrapper[4828]: E1129 07:02:27.411477 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:27 crc kubenswrapper[4828]: E1129 07:02:27.411533 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.450183 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.450213 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.450221 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.450235 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.450244 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:27Z","lastTransitionTime":"2025-11-29T07:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.552868 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.552945 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.552956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.552972 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.552983 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:27Z","lastTransitionTime":"2025-11-29T07:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.655698 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.655739 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.655748 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.655763 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.655773 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:27Z","lastTransitionTime":"2025-11-29T07:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.759101 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.759145 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.759157 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.759177 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.759189 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:27Z","lastTransitionTime":"2025-11-29T07:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.861811 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.861851 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.861863 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.861879 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.861891 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:27Z","lastTransitionTime":"2025-11-29T07:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.963915 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.964432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.964671 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.964930 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:27 crc kubenswrapper[4828]: I1129 07:02:27.965234 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:27Z","lastTransitionTime":"2025-11-29T07:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.067630 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.067659 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.067667 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.067681 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.067693 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:28Z","lastTransitionTime":"2025-11-29T07:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.169987 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.170049 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.170063 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.170084 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.170099 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:28Z","lastTransitionTime":"2025-11-29T07:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.272584 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.273009 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.273161 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.273352 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.273501 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:28Z","lastTransitionTime":"2025-11-29T07:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.375870 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.375943 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.375960 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.375984 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.376004 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:28Z","lastTransitionTime":"2025-11-29T07:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.478770 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.478807 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.478818 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.478836 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.478848 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:28Z","lastTransitionTime":"2025-11-29T07:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.586665 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.586747 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.586773 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.586805 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.586828 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:28Z","lastTransitionTime":"2025-11-29T07:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.689878 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.689929 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.689940 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.689957 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.689968 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:28Z","lastTransitionTime":"2025-11-29T07:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.792060 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.792414 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.792427 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.792444 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.792457 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:28Z","lastTransitionTime":"2025-11-29T07:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.894779 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.894843 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.894864 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.894893 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.894919 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:28Z","lastTransitionTime":"2025-11-29T07:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.997947 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.998003 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.998015 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.998038 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:28 crc kubenswrapper[4828]: I1129 07:02:28.998054 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:28Z","lastTransitionTime":"2025-11-29T07:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.101025 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.101074 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.101089 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.101107 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.101118 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:29Z","lastTransitionTime":"2025-11-29T07:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.172467 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.172605 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.172629 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:33.172609613 +0000 UTC m=+152.794685671 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.172673 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.172756 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.172797 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:03:33.172790048 +0000 UTC m=+152.794866106 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.172798 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.172933 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:03:33.17290603 +0000 UTC m=+152.794982108 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.203590 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.203657 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.203672 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.203690 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 
29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.203701 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:29Z","lastTransitionTime":"2025-11-29T07:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.273365 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.273419 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.273592 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.273622 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.273645 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.273696 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:03:33.273679868 +0000 UTC m=+152.895755926 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.273592 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.273760 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.273800 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.273836 4828 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:03:33.273824772 +0000 UTC m=+152.895900830 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.306300 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.306350 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.306361 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.306377 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.306387 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:29Z","lastTransitionTime":"2025-11-29T07:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.408783 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.408830 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.408845 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.408869 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.408881 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:29Z","lastTransitionTime":"2025-11-29T07:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.411033 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.411083 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.411126 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.411140 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.411091 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.411179 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.411300 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:29 crc kubenswrapper[4828]: E1129 07:02:29.411337 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.511707 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.511753 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.511762 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.511776 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.511786 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:29Z","lastTransitionTime":"2025-11-29T07:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.614511 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.614553 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.614567 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.614584 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.614597 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:29Z","lastTransitionTime":"2025-11-29T07:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.716581 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.716616 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.716627 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.716643 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.716653 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:29Z","lastTransitionTime":"2025-11-29T07:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.819414 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.819470 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.819485 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.819511 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.819522 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:29Z","lastTransitionTime":"2025-11-29T07:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.922396 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.922433 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.922443 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.922456 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:29 crc kubenswrapper[4828]: I1129 07:02:29.922466 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:29Z","lastTransitionTime":"2025-11-29T07:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.024492 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.024564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.024575 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.024590 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.024600 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:30Z","lastTransitionTime":"2025-11-29T07:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.128180 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.128236 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.128255 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.128310 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.128323 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:30Z","lastTransitionTime":"2025-11-29T07:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.230739 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.230781 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.230789 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.230803 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.230817 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:30Z","lastTransitionTime":"2025-11-29T07:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.333312 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.333591 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.333661 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.333744 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.333809 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:30Z","lastTransitionTime":"2025-11-29T07:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.436328 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.436358 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.436366 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.436379 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.436389 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:30Z","lastTransitionTime":"2025-11-29T07:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.538621 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.538657 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.538667 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.538684 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.538695 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:30Z","lastTransitionTime":"2025-11-29T07:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.641602 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.641884 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.641948 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.642042 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.642287 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:30Z","lastTransitionTime":"2025-11-29T07:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.745633 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.745678 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.745688 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.745702 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.745714 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:30Z","lastTransitionTime":"2025-11-29T07:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.847837 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.847895 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.847916 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.847937 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.847952 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:30Z","lastTransitionTime":"2025-11-29T07:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.950282 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.950337 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.950347 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.950366 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:30 crc kubenswrapper[4828]: I1129 07:02:30.950380 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:30Z","lastTransitionTime":"2025-11-29T07:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.053629 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.053676 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.053692 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.053716 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.053730 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:31Z","lastTransitionTime":"2025-11-29T07:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.156387 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.156415 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.156424 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.156436 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.156446 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:31Z","lastTransitionTime":"2025-11-29T07:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.258523 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.258567 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.258578 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.258594 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.258604 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:31Z","lastTransitionTime":"2025-11-29T07:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.360843 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.360878 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.360894 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.360910 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.360920 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:31Z","lastTransitionTime":"2025-11-29T07:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.411559 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:31 crc kubenswrapper[4828]: E1129 07:02:31.411703 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.411559 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.411777 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:31 crc kubenswrapper[4828]: E1129 07:02:31.412494 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:31 crc kubenswrapper[4828]: E1129 07:02:31.412636 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.412867 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:31 crc kubenswrapper[4828]: E1129 07:02:31.412969 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.426041 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.430745 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.444800 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.455370 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc 
kubenswrapper[4828]: I1129 07:02:31.463589 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.463638 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.463647 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.463661 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.463670 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:31Z","lastTransitionTime":"2025-11-29T07:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.470482 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.484181 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.498063 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.543817 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.556460 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.566249 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.566323 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.566333 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.566349 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.566361 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:31Z","lastTransitionTime":"2025-11-29T07:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.572092 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.585896 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.607444 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:05Z\\\",\\\"message\\\":\\\"nshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410543 6481 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410636 6481 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:02:03.411011 6481 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 07:02:03.411039 6481 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:02:03.411053 6481 factory.go:656] Stopping watch factory\\\\nI1129 07:02:03.411069 6481 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:02:03.411078 6481 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:02:03.454137 6481 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:02:03.454168 6481 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:02:03.454215 6481 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:02:03.454234 6481 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:02:03.454328 6481 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:02:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044
f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.630214 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.641254 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87ab1-f8c3-4d1e-9bcf-b3e3bbcb34d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210a8b6f3a1cc8705eb905de8c8f2bf7a50c8863b8a4807626e1c35693129ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.656683 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.668991 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.669029 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.669086 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.669095 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.669112 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.669123 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:31Z","lastTransitionTime":"2025-11-29T07:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.679853 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.691584 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:16Z\\\",\\\"message\\\":\\\"2025-11-29T07:01:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad\\\\n2025-11-29T07:01:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad to /host/opt/cni/bin/\\\\n2025-11-29T07:01:31Z [verbose] multus-daemon started\\\\n2025-11-29T07:01:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:02:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-mu
ltus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.701827 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.771802 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.771835 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.771843 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.771857 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.771868 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:31Z","lastTransitionTime":"2025-11-29T07:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.873178 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.873203 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.873211 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.873223 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.873231 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:31Z","lastTransitionTime":"2025-11-29T07:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.976388 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.976429 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.976440 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.976455 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:31 crc kubenswrapper[4828]: I1129 07:02:31.976466 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:31Z","lastTransitionTime":"2025-11-29T07:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.079417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.079472 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.079483 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.079501 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.079515 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:32Z","lastTransitionTime":"2025-11-29T07:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.181457 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.181499 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.181511 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.181526 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.181538 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:32Z","lastTransitionTime":"2025-11-29T07:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.284226 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.284299 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.284311 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.284326 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.284337 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:32Z","lastTransitionTime":"2025-11-29T07:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.386776 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.386843 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.386852 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.386872 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.386890 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:32Z","lastTransitionTime":"2025-11-29T07:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.489851 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.489956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.489968 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.489984 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.489996 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:32Z","lastTransitionTime":"2025-11-29T07:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.592808 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.592848 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.592856 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.592870 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.592879 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:32Z","lastTransitionTime":"2025-11-29T07:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.695787 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.695834 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.695847 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.695869 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.695885 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:32Z","lastTransitionTime":"2025-11-29T07:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.798950 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.799020 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.799029 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.799050 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.799072 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:32Z","lastTransitionTime":"2025-11-29T07:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.902121 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.902176 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.902187 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.902204 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:32 crc kubenswrapper[4828]: I1129 07:02:32.902221 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:32Z","lastTransitionTime":"2025-11-29T07:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.004611 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.004657 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.004668 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.004683 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.004693 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.109145 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.109192 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.109204 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.109222 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.109242 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.212472 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.212514 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.212526 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.212541 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.212551 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.314870 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.314912 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.314922 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.314941 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.314952 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.411146 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.411188 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.411250 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.411604 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:33 crc kubenswrapper[4828]: E1129 07:02:33.411723 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:33 crc kubenswrapper[4828]: E1129 07:02:33.411782 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:33 crc kubenswrapper[4828]: E1129 07:02:33.411871 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:33 crc kubenswrapper[4828]: E1129 07:02:33.411976 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.412122 4828 scope.go:117] "RemoveContainer" containerID="9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.418997 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.419041 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.419055 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.419073 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.419085 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.521772 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.521805 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.521814 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.521827 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.521840 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.623895 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.623939 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.623975 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.623996 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.624010 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.726812 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.726858 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.726870 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.726890 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.726953 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.829140 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.829197 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.829213 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.829233 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.829249 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.878680 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.878731 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.878748 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.878768 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.878781 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: E1129 07:02:33.894144 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.899091 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.899133 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.899142 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.899157 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.899170 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: E1129 07:02:33.924170 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.931451 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.931517 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.931538 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.931555 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.931568 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: E1129 07:02:33.944144 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.948380 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.948428 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.948448 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.948467 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.948479 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: E1129 07:02:33.962171 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.967526 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.967607 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.967623 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.967642 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.967656 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:33 crc kubenswrapper[4828]: E1129 07:02:33.981948 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:33 crc kubenswrapper[4828]: E1129 07:02:33.982136 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.984232 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.984284 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.984293 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.984313 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:33 crc kubenswrapper[4828]: I1129 07:02:33.984326 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:33Z","lastTransitionTime":"2025-11-29T07:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.086740 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.086783 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.086795 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.086812 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.086825 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:34Z","lastTransitionTime":"2025-11-29T07:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.189773 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.189807 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.189816 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.189830 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.189839 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:34Z","lastTransitionTime":"2025-11-29T07:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.297300 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.297389 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.297401 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.297447 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.297476 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:34Z","lastTransitionTime":"2025-11-29T07:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.400888 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.400951 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.401015 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.401038 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.401051 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:34Z","lastTransitionTime":"2025-11-29T07:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.504903 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.504945 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.504956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.504971 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.504985 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:34Z","lastTransitionTime":"2025-11-29T07:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.607916 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.607971 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.607983 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.608001 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.608015 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:34Z","lastTransitionTime":"2025-11-29T07:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.710435 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.710510 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.710546 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.710567 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.710578 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:34Z","lastTransitionTime":"2025-11-29T07:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.812901 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.812952 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.812964 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.812981 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.812994 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:34Z","lastTransitionTime":"2025-11-29T07:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.876887 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/2.log" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.880524 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerStarted","Data":"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b"} Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.881518 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.896775 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c0
4e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.916200 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.916238 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.916248 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.916296 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.916317 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:34Z","lastTransitionTime":"2025-11-29T07:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.927951 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:05Z\\\",\\\"message\\\":\\\"nshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410543 6481 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410636 6481 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:02:03.411011 6481 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 07:02:03.411039 6481 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:02:03.411053 6481 factory.go:656] Stopping watch factory\\\\nI1129 07:02:03.411069 6481 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:02:03.411078 6481 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:02:03.454137 6481 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:02:03.454168 6481 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:02:03.454215 6481 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:02:03.454234 6481 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:02:03.454328 6481 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:02:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.940414 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.952715 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87ab1-f8c3-4d1e-9bcf-b3e3bbcb34d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210a8b6f3a1cc8705eb905de8c8f2bf7a50c8863b8a4807626e1c35693129ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.965464 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.978453 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:34 crc kubenswrapper[4828]: I1129 07:02:34.988800 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:02:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.000674 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:16Z\\\",\\\"message\\\":\\\"2025-11-29T07:01:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad\\\\n2025-11-29T07:01:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad to /host/opt/cni/bin/\\\\n2025-11-29T07:01:31Z [verbose] multus-daemon started\\\\n2025-11-29T07:01:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:02:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.011235 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.018400 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.018465 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.018476 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.018493 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.018506 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:35Z","lastTransitionTime":"2025-11-29T07:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.040561 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.053917 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.072711 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.085799 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:35 crc 
kubenswrapper[4828]: I1129 07:02:35.097577 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"864c2384-f751-4d46-829d-13b149f11a8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e40c01e6a92dd24a54c2b65fa533a70235f3faca620c24dc218d0a658b523141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://1744b077d82a17551442cb53acdd49a5d3dca6ce86e7028788b4f575061e493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1744b077d82a17551442cb53acdd49a5d3dca6ce86e7028788b4f575061e493f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.115245 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.120588 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.120641 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.120656 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.120674 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.120732 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:35Z","lastTransitionTime":"2025-11-29T07:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.127633 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d2086
9e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.144741 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.160561 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.182609 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.222757 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.222790 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.222798 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.222811 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 
29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.222821 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:35Z","lastTransitionTime":"2025-11-29T07:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.325339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.325387 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.325401 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.325419 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.325433 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:35Z","lastTransitionTime":"2025-11-29T07:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.411239 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.411346 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.411336 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.411305 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:35 crc kubenswrapper[4828]: E1129 07:02:35.411464 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:35 crc kubenswrapper[4828]: E1129 07:02:35.411566 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:35 crc kubenswrapper[4828]: E1129 07:02:35.411640 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:35 crc kubenswrapper[4828]: E1129 07:02:35.411697 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.428151 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.428196 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.428207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.428222 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.428233 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:35Z","lastTransitionTime":"2025-11-29T07:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.530653 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.530702 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.530714 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.530733 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.530746 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:35Z","lastTransitionTime":"2025-11-29T07:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.633520 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.633564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.633575 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.633591 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.633602 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:35Z","lastTransitionTime":"2025-11-29T07:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.735765 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.735809 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.735820 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.735832 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.735841 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:35Z","lastTransitionTime":"2025-11-29T07:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.838301 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.838344 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.838356 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.838374 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.838386 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:35Z","lastTransitionTime":"2025-11-29T07:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.941178 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.941214 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.941252 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.941600 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:35 crc kubenswrapper[4828]: I1129 07:02:35.941613 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:35Z","lastTransitionTime":"2025-11-29T07:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.044806 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.044857 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.044870 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.044886 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.044899 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:36Z","lastTransitionTime":"2025-11-29T07:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.153431 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.153460 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.153468 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.153499 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.153508 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:36Z","lastTransitionTime":"2025-11-29T07:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.256244 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.256308 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.256322 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.256341 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.256353 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:36Z","lastTransitionTime":"2025-11-29T07:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.358726 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.358767 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.358777 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.358793 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.358805 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:36Z","lastTransitionTime":"2025-11-29T07:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.461671 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.461709 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.461722 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.461738 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.461752 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:36Z","lastTransitionTime":"2025-11-29T07:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.564243 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.564314 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.564327 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.564343 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.564356 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:36Z","lastTransitionTime":"2025-11-29T07:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.667724 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.667778 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.667789 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.667807 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.667818 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:36Z","lastTransitionTime":"2025-11-29T07:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.770210 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.770261 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.770295 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.770313 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.770325 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:36Z","lastTransitionTime":"2025-11-29T07:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.873013 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.873070 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.873083 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.873103 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.873117 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:36Z","lastTransitionTime":"2025-11-29T07:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.889023 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/3.log" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.889834 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/2.log" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.893887 4828 generic.go:334] "Generic (PLEG): container finished" podID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerID="e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b" exitCode=1 Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.893940 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerDied","Data":"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b"} Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.894006 4828 scope.go:117] "RemoveContainer" containerID="9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.894765 4828 scope.go:117] "RemoveContainer" containerID="e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b" Nov 29 07:02:36 crc kubenswrapper[4828]: E1129 07:02:36.895802 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.911468 4828 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"864c2384-f751-4d46-829d-13b149f11a8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e40c01e6a92dd24a54c2b65fa533a70235f3faca620c24dc218d0a658b523141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1744b077d82a17551442cb53acdd49a5d3dca6ce86e7028788b4f575061e493f\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1744b077d82a17551442cb53acdd49a5d3dca6ce86e7028788b4f575061e493f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.926238 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.943955 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.957331 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:36 crc 
kubenswrapper[4828]: I1129 07:02:36.973381 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f10494
5aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 
07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.976138 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.976172 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.976181 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.976193 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.976202 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:36Z","lastTransitionTime":"2025-11-29T07:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:36 crc kubenswrapper[4828]: I1129 07:02:36.986150 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.000760 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.013521 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.028671 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.046864 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.060994 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.079366 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.079422 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.079433 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:37 crc 
kubenswrapper[4828]: I1129 07:02:37.079449 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.079461 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:37Z","lastTransitionTime":"2025-11-29T07:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.081791 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:05Z\\\",\\\"message\\\":\\\"nshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410543 6481 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410636 6481 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:02:03.411011 6481 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 07:02:03.411039 6481 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:02:03.411053 6481 factory.go:656] Stopping watch factory\\\\nI1129 07:02:03.411069 6481 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:02:03.411078 6481 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:02:03.454137 6481 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:02:03.454168 6481 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:02:03.454215 6481 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:02:03.454234 6481 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:02:03.454328 6481 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:02:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:36Z\\\",\\\"message\\\":\\\"rotocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1129 07:02:35.544319 6937 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:35Z is after 2025-08-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/
env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:37 crc 
kubenswrapper[4828]: I1129 07:02:37.096369 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:16Z\\\",\\\"message\\\":\\\"2025-11-29T07:01:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad\\\\n2025-11-29T07:01:30+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad to /host/opt/cni/bin/\\\\n2025-11-29T07:01:31Z [verbose] multus-daemon started\\\\n2025-11-29T07:01:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:02:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.107483 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.127838 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07
:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.144410 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87ab1-f8c3-4d1e-9bcf-b3e3bbcb34d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210a8b6f3a1cc8705eb905de8c8f2bf7a50c8863b8a4807626e1c35693129ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.164956 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.182674 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.182783 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.182795 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.182814 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.182786 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.182826 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:37Z","lastTransitionTime":"2025-11-29T07:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.195736 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.285584 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.285626 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.285636 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.285651 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.285662 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:37Z","lastTransitionTime":"2025-11-29T07:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.388725 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.388762 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.388770 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.388786 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.388797 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:37Z","lastTransitionTime":"2025-11-29T07:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.411296 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.411322 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.411416 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:37 crc kubenswrapper[4828]: E1129 07:02:37.411532 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.411642 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:37 crc kubenswrapper[4828]: E1129 07:02:37.411737 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:37 crc kubenswrapper[4828]: E1129 07:02:37.411786 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:37 crc kubenswrapper[4828]: E1129 07:02:37.411904 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.491412 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.491460 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.491472 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.491488 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.491500 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:37Z","lastTransitionTime":"2025-11-29T07:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.595115 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.595194 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.595207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.595232 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.595249 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:37Z","lastTransitionTime":"2025-11-29T07:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.698529 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.698583 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.698599 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.698623 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.698637 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:37Z","lastTransitionTime":"2025-11-29T07:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.802220 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.802312 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.802325 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.802345 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.802357 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:37Z","lastTransitionTime":"2025-11-29T07:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.899530 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/3.log" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.903859 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.903893 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.903901 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.903914 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:37 crc kubenswrapper[4828]: I1129 07:02:37.903923 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:37Z","lastTransitionTime":"2025-11-29T07:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.005836 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.005879 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.005890 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.005908 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.005920 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:38Z","lastTransitionTime":"2025-11-29T07:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.108736 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.109088 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.109176 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.109292 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.109409 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:38Z","lastTransitionTime":"2025-11-29T07:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.211872 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.211903 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.211911 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.211925 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.211933 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:38Z","lastTransitionTime":"2025-11-29T07:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.314374 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.314423 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.314438 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.314453 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.314462 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:38Z","lastTransitionTime":"2025-11-29T07:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.417186 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.417231 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.417244 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.417261 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.417288 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:38Z","lastTransitionTime":"2025-11-29T07:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.519736 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.519781 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.519791 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.519807 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.519818 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:38Z","lastTransitionTime":"2025-11-29T07:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.621499 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.621527 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.621536 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.621549 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.621558 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:38Z","lastTransitionTime":"2025-11-29T07:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.724692 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.724747 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.724757 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.724771 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.724781 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:38Z","lastTransitionTime":"2025-11-29T07:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.827847 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.827901 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.827915 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.827936 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.827952 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:38Z","lastTransitionTime":"2025-11-29T07:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.932252 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.932346 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.932358 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.932382 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:38 crc kubenswrapper[4828]: I1129 07:02:38.932398 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:38Z","lastTransitionTime":"2025-11-29T07:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.034794 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.034842 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.034856 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.034873 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.034885 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:39Z","lastTransitionTime":"2025-11-29T07:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.137928 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.137984 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.137996 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.138013 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.138024 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:39Z","lastTransitionTime":"2025-11-29T07:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.240465 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.240517 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.240531 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.240545 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.240556 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:39Z","lastTransitionTime":"2025-11-29T07:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.343346 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.343393 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.343406 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.343425 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.343437 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:39Z","lastTransitionTime":"2025-11-29T07:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.410809 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.410931 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.410979 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:39 crc kubenswrapper[4828]: E1129 07:02:39.411014 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.410852 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:39 crc kubenswrapper[4828]: E1129 07:02:39.411191 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:39 crc kubenswrapper[4828]: E1129 07:02:39.411619 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:39 crc kubenswrapper[4828]: E1129 07:02:39.411692 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.446669 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.446722 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.446733 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.446752 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.446775 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:39Z","lastTransitionTime":"2025-11-29T07:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.549413 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.549462 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.549471 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.549486 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.549495 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:39Z","lastTransitionTime":"2025-11-29T07:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.652905 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.652968 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.653025 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.653047 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.653058 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:39Z","lastTransitionTime":"2025-11-29T07:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.755608 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.755678 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.755694 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.755715 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.755728 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:39Z","lastTransitionTime":"2025-11-29T07:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.858143 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.858184 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.858198 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.858216 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.858227 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:39Z","lastTransitionTime":"2025-11-29T07:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.961292 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.961339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.961355 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.961373 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:39 crc kubenswrapper[4828]: I1129 07:02:39.961388 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:39Z","lastTransitionTime":"2025-11-29T07:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.063899 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.063935 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.063944 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.063961 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.063973 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:40Z","lastTransitionTime":"2025-11-29T07:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.167044 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.167121 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.167142 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.167172 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.167194 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:40Z","lastTransitionTime":"2025-11-29T07:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.271121 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.271164 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.271179 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.271198 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.271214 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:40Z","lastTransitionTime":"2025-11-29T07:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.374265 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.374334 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.374348 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.374367 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.374383 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:40Z","lastTransitionTime":"2025-11-29T07:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.476905 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.476944 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.476951 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.476966 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.476976 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:40Z","lastTransitionTime":"2025-11-29T07:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.579094 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.579182 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.579194 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.579210 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.579220 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:40Z","lastTransitionTime":"2025-11-29T07:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.681395 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.681454 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.681464 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.681483 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.681502 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:40Z","lastTransitionTime":"2025-11-29T07:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.783939 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.783992 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.784005 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.784022 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.784032 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:40Z","lastTransitionTime":"2025-11-29T07:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.887737 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.887797 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.887812 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.887828 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.887839 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:40Z","lastTransitionTime":"2025-11-29T07:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.990600 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.990862 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.991091 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.991327 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:40 crc kubenswrapper[4828]: I1129 07:02:40.991492 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:40Z","lastTransitionTime":"2025-11-29T07:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.094414 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.094899 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.095003 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.095127 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.095192 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:41Z","lastTransitionTime":"2025-11-29T07:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.197978 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.198019 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.198031 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.198055 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.198070 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:41Z","lastTransitionTime":"2025-11-29T07:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.300752 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.300994 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.301054 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.301114 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.301200 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:41Z","lastTransitionTime":"2025-11-29T07:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.404374 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.404690 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.404797 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.404895 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.404979 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:41Z","lastTransitionTime":"2025-11-29T07:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.411142 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.411195 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.411257 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:41 crc kubenswrapper[4828]: E1129 07:02:41.411318 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.411384 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:41 crc kubenswrapper[4828]: E1129 07:02:41.411406 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:41 crc kubenswrapper[4828]: E1129 07:02:41.411507 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:41 crc kubenswrapper[4828]: E1129 07:02:41.411573 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.426309 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"864c2384-f751-4d46-829d-13b149f11a8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e40c01e6a92dd24a54c2b65fa533a70235f3faca620c24dc218d0a658b523141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1744b077d82a17551442cb53acdd49a5d3dca6ce86e7028788b4f575061e493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1744b077d82a17551442cb53acdd49a5d3dca6ce86e7028788b4f575061e493f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.443705 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.458389 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.469595 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc 
kubenswrapper[4828]: I1129 07:02:41.480640 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.494732 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6
a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.506942 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.507613 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.507647 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.507661 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.507677 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.507687 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:41Z","lastTransitionTime":"2025-11-29T07:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.519515 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d2086
9e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.528624 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.543830 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.555947 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.575585 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f869a2a839c3d40153947f66213556f5539a6b2c0a271831384b151d7dcdc57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:05Z\\\",\\\"message\\\":\\\"nshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410543 6481 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:02:03.410636 6481 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:02:03.411011 6481 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 07:02:03.411039 6481 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:02:03.411053 6481 factory.go:656] Stopping watch factory\\\\nI1129 07:02:03.411069 6481 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:02:03.411078 6481 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:02:03.454137 6481 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:02:03.454168 6481 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:02:03.454215 6481 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:02:03.454234 6481 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:02:03.454328 6481 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:02:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:36Z\\\",\\\"message\\\":\\\"rotocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1129 07:02:35.544319 6937 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:35Z is after 2025-08-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/
env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc 
kubenswrapper[4828]: I1129 07:02:41.589582 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.604852 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"20
25-11-29T07:02:16Z\\\",\\\"message\\\":\\\"2025-11-29T07:01:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad\\\\n2025-11-29T07:01:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad to /host/opt/cni/bin/\\\\n2025-11-29T07:01:31Z [verbose] multus-daemon started\\\\n2025-11-29T07:01:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:02:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"h
ostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.609955 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.610006 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.610020 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.610037 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.610048 4828 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:41Z","lastTransitionTime":"2025-11-29T07:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.617965 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.639255 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.654456 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87ab1-f8c3-4d1e-9bcf-b3e3bbcb34d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210a8b6f3a1cc8705eb905de8c8f2bf7a50c8863b8a4807626e1c35693129ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.670902 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.685071 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.712588 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.712638 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.712648 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.712663 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.712674 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:41Z","lastTransitionTime":"2025-11-29T07:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.814782 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.814825 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.814837 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.814851 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.814862 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:41Z","lastTransitionTime":"2025-11-29T07:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.916750 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.916807 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.916820 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.916836 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:41 crc kubenswrapper[4828]: I1129 07:02:41.916849 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:41Z","lastTransitionTime":"2025-11-29T07:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.020733 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.020794 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.020809 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.020838 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.020854 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:42Z","lastTransitionTime":"2025-11-29T07:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.123513 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.123570 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.123582 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.123599 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.123611 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:42Z","lastTransitionTime":"2025-11-29T07:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.226611 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.226656 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.226667 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.226682 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.226693 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:42Z","lastTransitionTime":"2025-11-29T07:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.329234 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.329311 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.329325 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.329342 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.329353 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:42Z","lastTransitionTime":"2025-11-29T07:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.432024 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.432068 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.432079 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.432098 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.432112 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:42Z","lastTransitionTime":"2025-11-29T07:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.533926 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.533960 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.533973 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.533988 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.533998 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:42Z","lastTransitionTime":"2025-11-29T07:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.637635 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.637986 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.638000 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.638017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.638030 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:42Z","lastTransitionTime":"2025-11-29T07:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.740640 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.740685 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.740695 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.740712 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.740721 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:42Z","lastTransitionTime":"2025-11-29T07:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.843025 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.843329 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.843405 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.843473 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.843542 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:42Z","lastTransitionTime":"2025-11-29T07:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.946803 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.946861 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.946871 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.946885 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:42 crc kubenswrapper[4828]: I1129 07:02:42.946895 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:42Z","lastTransitionTime":"2025-11-29T07:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.049958 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.050010 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.050028 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.050053 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.050065 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:43Z","lastTransitionTime":"2025-11-29T07:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.152938 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.152990 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.153003 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.153019 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.153030 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:43Z","lastTransitionTime":"2025-11-29T07:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.255463 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.255564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.255577 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.255593 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.255606 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:43Z","lastTransitionTime":"2025-11-29T07:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.359886 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.359981 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.359998 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.360026 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.360045 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:43Z","lastTransitionTime":"2025-11-29T07:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.411449 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.411514 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.411516 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.411626 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:43 crc kubenswrapper[4828]: E1129 07:02:43.411616 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:43 crc kubenswrapper[4828]: E1129 07:02:43.411716 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:43 crc kubenswrapper[4828]: E1129 07:02:43.411802 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:43 crc kubenswrapper[4828]: E1129 07:02:43.411864 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.462701 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.462762 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.462774 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.462828 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.462847 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:43Z","lastTransitionTime":"2025-11-29T07:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.566326 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.566367 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.566375 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.566390 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.566412 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:43Z","lastTransitionTime":"2025-11-29T07:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.668736 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.668776 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.668784 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.668798 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.668817 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:43Z","lastTransitionTime":"2025-11-29T07:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.770731 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.771065 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.771149 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.771242 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.771352 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:43Z","lastTransitionTime":"2025-11-29T07:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.874317 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.874348 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.874363 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.874375 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.874384 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:43Z","lastTransitionTime":"2025-11-29T07:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.977854 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.977908 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.977922 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.977938 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:43 crc kubenswrapper[4828]: I1129 07:02:43.977952 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:43Z","lastTransitionTime":"2025-11-29T07:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.076801 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.076844 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.076857 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.076873 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.076890 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:44Z","lastTransitionTime":"2025-11-29T07:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:44 crc kubenswrapper[4828]: E1129 07:02:44.093777 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.097720 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.097763 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.097773 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.097790 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.097801 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:44Z","lastTransitionTime":"2025-11-29T07:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:44 crc kubenswrapper[4828]: E1129 07:02:44.110589 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.120706 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.120927 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.120956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.120976 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.120987 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:44Z","lastTransitionTime":"2025-11-29T07:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:44 crc kubenswrapper[4828]: E1129 07:02:44.133964 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.137940 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.137982 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.137991 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.138005 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.138013 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:44Z","lastTransitionTime":"2025-11-29T07:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:44 crc kubenswrapper[4828]: E1129 07:02:44.150177 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.154622 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.154658 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.154667 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.154680 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.154693 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:44Z","lastTransitionTime":"2025-11-29T07:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:44 crc kubenswrapper[4828]: E1129 07:02:44.167123 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1da721f-d6f2-4e3a-b5e9-e25de0b32409\\\",\\\"systemUUID\\\":\\\"0abdd982-eeb9-4e63-b4dc-a9e6bc31d088\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:44 crc kubenswrapper[4828]: E1129 07:02:44.167233 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.168901 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.168929 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.168940 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.168956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.168966 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:44Z","lastTransitionTime":"2025-11-29T07:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.272042 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.272100 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.272112 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.272133 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.272146 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:44Z","lastTransitionTime":"2025-11-29T07:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.375117 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.375165 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.375175 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.375189 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.375199 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:44Z","lastTransitionTime":"2025-11-29T07:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.478091 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.478458 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.478550 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.478652 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.478768 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:44Z","lastTransitionTime":"2025-11-29T07:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.580921 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.580959 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.580971 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.580992 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.581003 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:44Z","lastTransitionTime":"2025-11-29T07:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.683557 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.683589 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.683600 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.683615 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.683627 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:44Z","lastTransitionTime":"2025-11-29T07:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.786440 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.786492 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.786512 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.786539 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.786557 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:44Z","lastTransitionTime":"2025-11-29T07:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.889377 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.889445 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.889463 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.889488 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.889506 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:44Z","lastTransitionTime":"2025-11-29T07:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.992521 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.992815 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.992883 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.992959 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:44 crc kubenswrapper[4828]: I1129 07:02:44.993051 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:44Z","lastTransitionTime":"2025-11-29T07:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.095812 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.095901 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.095912 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.095929 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.095940 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:45Z","lastTransitionTime":"2025-11-29T07:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.198238 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.198319 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.198335 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.198352 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.198369 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:45Z","lastTransitionTime":"2025-11-29T07:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.301891 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.301939 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.301951 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.301967 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.301979 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:45Z","lastTransitionTime":"2025-11-29T07:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.404514 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.404571 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.404585 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.404604 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.404617 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:45Z","lastTransitionTime":"2025-11-29T07:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.411752 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.411839 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:45 crc kubenswrapper[4828]: E1129 07:02:45.411905 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.411922 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.412020 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:45 crc kubenswrapper[4828]: E1129 07:02:45.412032 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:45 crc kubenswrapper[4828]: E1129 07:02:45.412089 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:45 crc kubenswrapper[4828]: E1129 07:02:45.412364 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.507307 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.507335 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.507344 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.507357 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.507368 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:45Z","lastTransitionTime":"2025-11-29T07:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.609382 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.609411 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.609419 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.609432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.609442 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:45Z","lastTransitionTime":"2025-11-29T07:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.711862 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.711914 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.711926 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.711945 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.711960 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:45Z","lastTransitionTime":"2025-11-29T07:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.815820 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.815903 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.815919 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.815947 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.815966 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:45Z","lastTransitionTime":"2025-11-29T07:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.919035 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.919086 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.919097 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.919118 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:45 crc kubenswrapper[4828]: I1129 07:02:45.919138 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:45Z","lastTransitionTime":"2025-11-29T07:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.022745 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.022779 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.022788 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.022800 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.022809 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:46Z","lastTransitionTime":"2025-11-29T07:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.125955 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.126003 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.126015 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.126030 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.126041 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:46Z","lastTransitionTime":"2025-11-29T07:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.228941 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.229466 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.229550 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.229585 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.229609 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:46Z","lastTransitionTime":"2025-11-29T07:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.262867 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs\") pod \"network-metrics-daemon-4ffn6\" (UID: \"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\") " pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:46 crc kubenswrapper[4828]: E1129 07:02:46.263041 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:02:46 crc kubenswrapper[4828]: E1129 07:02:46.263146 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs podName:f6581e2a-a98c-493d-8c8f-20c5b4c4b17c nodeName:}" failed. No retries permitted until 2025-11-29 07:03:50.263106029 +0000 UTC m=+169.885182087 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs") pod "network-metrics-daemon-4ffn6" (UID: "f6581e2a-a98c-493d-8c8f-20c5b4c4b17c") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.332136 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.332187 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.332201 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.332218 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.332230 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:46Z","lastTransitionTime":"2025-11-29T07:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.435821 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.435903 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.435916 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.435931 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.435941 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:46Z","lastTransitionTime":"2025-11-29T07:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.538921 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.539009 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.539028 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.539056 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.539074 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:46Z","lastTransitionTime":"2025-11-29T07:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.643346 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.643433 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.643458 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.643489 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.643512 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:46Z","lastTransitionTime":"2025-11-29T07:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.746821 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.746889 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.746901 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.746930 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.746945 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:46Z","lastTransitionTime":"2025-11-29T07:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.850985 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.851043 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.851057 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.851081 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.851108 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:46Z","lastTransitionTime":"2025-11-29T07:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.955240 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.955340 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.955358 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.955382 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:46 crc kubenswrapper[4828]: I1129 07:02:46.955402 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:46Z","lastTransitionTime":"2025-11-29T07:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.057970 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.058006 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.058017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.058030 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.058042 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:47Z","lastTransitionTime":"2025-11-29T07:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.162013 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.162063 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.162073 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.162088 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.162098 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:47Z","lastTransitionTime":"2025-11-29T07:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.264878 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.264930 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.264942 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.264958 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.264969 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:47Z","lastTransitionTime":"2025-11-29T07:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.367821 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.367854 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.367863 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.367876 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.367885 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:47Z","lastTransitionTime":"2025-11-29T07:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.411569 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.411620 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.411729 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.411746 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:47 crc kubenswrapper[4828]: E1129 07:02:47.411767 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:47 crc kubenswrapper[4828]: E1129 07:02:47.411914 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:47 crc kubenswrapper[4828]: E1129 07:02:47.412023 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:47 crc kubenswrapper[4828]: E1129 07:02:47.412253 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.413010 4828 scope.go:117] "RemoveContainer" containerID="e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b" Nov 29 07:02:47 crc kubenswrapper[4828]: E1129 07:02:47.413199 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.435070 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5525f872-6ecc-4751-a763-80d4b5db61ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00f455a956903ab2e3063a40e48eadbdce4838d8ea96bd9fa32d6398979ab32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a56
46fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05a21feea4b2db67d54eb25c4a98126e57e2442d5280d9080591940923ddbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6455fc49d6bedf9b582c4c2f742aaee4a4f796d4ed08e87518273a7fa87c9c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f
8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f10ada6d1c9ae68a9c9c87eb408c35f832f5126e7ad273e1d300ddaa63f50f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24993e0a4b42423cfaf36449ff6e9a172ba82ee50fcace2a1b436b82a9fef0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6aa2fee743c8e9f80502d26707a98093785d6125547771988f3d16f15bc555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41ec044723165d9c079bee4bfc06d139df426c51b0d4888ea98584483dac88e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5af
f5a1808eae1f827a2eaaf27c064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7dfe426ebf304a44bc234b96fb064bef5aff5a1808eae1f827a2eaaf27c064\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.445901 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87ab1-f8c3-4d1e-9bcf-b3e3bbcb34d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe8bc368267e0afc4945846995bc44f719b38d52c469fb9c11366ad6ac5f185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210a8b6f3a1cc8705eb905de8c8f2bf7a50c8863b8a4807626e1c35693129ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9ae8ff464d1268f98615506c779bf4a5d900ea4a4cdd8d8d8a417fd4de8ea4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c3ce231b91d868e7ff6a2e0266ec17ea3c3876ae4458298261c20e77462a2b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.458299 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://480daac05f7e1494bb257f6530a6662f55093535618771ec8634f50aba514856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.470873 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.471140 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.471160 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.471173 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.471190 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.471201 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:47Z","lastTransitionTime":"2025-11-29T07:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.482675 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f5076831b5046f7766a82a62869c2d8b14cefaaac727f7ffbbfb7d2fbfb160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.498533 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qfj9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a37050-181c-42b4-acf9-dc458a0f5bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:16Z\\\",\\\"message\\\":\\\"2025-11-29T07:01:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad\\\\n2025-11-29T07:01:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a46c0c94-57d2-4775-850e-e78a6acf9aad to /host/opt/cni/bin/\\\\n2025-11-29T07:01:31Z [verbose] multus-daemon started\\\\n2025-11-29T07:01:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:02:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-mu
ltus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvb9x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qfj9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.511109 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26zg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b3bb3f6-5c62-4db9-a1d3-0fd476518332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee900ad347a894944cb7a496c4e8a5c78ce0569e75fe6a6832fd328c2844efb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smgvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26zg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.520038 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"864c2384-f751-4d46-829d-13b149f11a8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e40c01e6a92dd24a54c2b65fa533a70235f3faca620c24dc218d0a658b523141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1744b077d82a17551442cb53acdd49a5d3dca6ce86e7028788b4f575061e493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1744b077d82a17551442cb53acdd49a5d3dca6ce86e7028788b4f575061e493f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc 
kubenswrapper[4828]: I1129 07:02:47.532332 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.550645 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5836996-65fd-4b24-b757-269259483919\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5602b5760379f638cdca17ca8c8abd5cc6b719097b59a30a9204e15bb35d3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a1f42b91bbf247cb0a7d681aa8a849c750b6ede38bde61fc2d43c9b71caf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2ead9fbf16bedb0963f1bfebe112c27eb163928629efc4bf5ec5a245ed3507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://678eb63d28ebe95f3000ed702bc5808cc67406e007f9a1ffd0d0970007f637ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd99f
7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd99f7b71f41bf8c13cb2c91fe67353731a93be2ce82023a41b8d7e5cd82d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf2644758c00f232f02db0317aeaa80f20be5ccf9a1fc7d90533d35dc78b581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c2c3c006e2ad54e8ad8866852dc7f45ad5f8e5bdf0a21625676f7e0d90afcb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6v5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ghlnj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.562975 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4ffn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc 
kubenswrapper[4828]: I1129 07:02:47.574165 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.574211 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.574223 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.574329 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.574350 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:47Z","lastTransitionTime":"2025-11-29T07:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.615837 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"le observer\\\\nW1129 07:01:25.978364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1129 07:01:25.978566 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:01:25.992426 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1723935970/tls.crt::/tmp/serving-cert-1723935970/tls.key\\\\\\\"\\\\nI1129 07:01:26.582046 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:01:26.585353 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:01:26.585373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:01:26.585409 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:01:26.585416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:01:26.591631 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:01:26.591661 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591668 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:01:26.591672 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:01:26.591676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:01:26.591679 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:01:26.591682 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:01:26.591876 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:01:26.594262 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.630811 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4bd0bc-2360-4ab8-b977-3e10be35bdab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0700680297cb29561e07e1a840dadf53e77c9ef09efdb00aeca7eff1ae5af40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6708d35965aa49b82e9e05203e3157593985708c55b29acdb58121e10924cdac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22dee083c0e976d328fa7722a5dceaa70ed590514c47a491625b121b387d46a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.644956 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feae56848fb642a68635f7806057398dce2c1e7aac79914a8535157a0fd9c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://633d20869e7c7a30d5d55c84a41611eb5cbfe72bb906079af266761a147a5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.659958 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6388d13-a6fa-4313-b6ee-7ac3e47bc893\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484cf01224050bcbf139810fbbf4a20be41b21e0e562e0f5ef7111ff12c27ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qz9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.674153 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959bd1c3-fd44-4090-996b-6539586c31ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72d050172126a419ff83915c097e8f8471bdbabc1bfdb7810e839cc120c85464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72b4b70093ad86802930bf6fb54967486b5948129a1f65c4530667c1959b8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99lc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cv4sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.676955 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.676997 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.677012 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.677033 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.677045 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:47Z","lastTransitionTime":"2025-11-29T07:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.688601 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.702858 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce72f1df-15a3-475b-918b-9076a0d9c29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f616c04e90e73c5a1abfdc1201d8fd3f51f389f8ba3b4aa59b37ed1d20e61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2d
b7351e2479fffd8e471716be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w652b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dgclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.720419 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c273b031-d4b1-480a-9dd1-e26ed759c8a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:02:36Z\\\",\\\"message\\\":\\\"rotocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1129 07:02:35.544319 6937 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:35Z is after 2025-08-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:02:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83c6f9b5f6b6200044
f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rk2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-49f6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:02:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.780811 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.780868 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.780879 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.780901 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.780915 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:47Z","lastTransitionTime":"2025-11-29T07:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.883620 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.883653 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.883661 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.883674 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.883682 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:47Z","lastTransitionTime":"2025-11-29T07:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.987398 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.987470 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.987489 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.987519 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:47 crc kubenswrapper[4828]: I1129 07:02:47.987545 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:47Z","lastTransitionTime":"2025-11-29T07:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.090374 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.090423 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.090440 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.090465 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.090488 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:48Z","lastTransitionTime":"2025-11-29T07:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.194744 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.194810 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.194821 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.194846 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.194866 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:48Z","lastTransitionTime":"2025-11-29T07:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.299474 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.299538 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.299557 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.299677 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.299691 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:48Z","lastTransitionTime":"2025-11-29T07:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.401961 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.402011 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.402022 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.402036 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.402048 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:48Z","lastTransitionTime":"2025-11-29T07:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.504912 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.504969 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.504979 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.504996 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.505008 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:48Z","lastTransitionTime":"2025-11-29T07:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.608232 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.608367 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.608384 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.608404 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.608417 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:48Z","lastTransitionTime":"2025-11-29T07:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.712048 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.712126 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.712146 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.712180 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.712206 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:48Z","lastTransitionTime":"2025-11-29T07:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.815414 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.815469 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.815481 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.815505 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.815545 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:48Z","lastTransitionTime":"2025-11-29T07:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.918795 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.918857 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.918873 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.918896 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:48 crc kubenswrapper[4828]: I1129 07:02:48.918916 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:48Z","lastTransitionTime":"2025-11-29T07:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.021901 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.021993 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.022006 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.022033 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.022047 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:49Z","lastTransitionTime":"2025-11-29T07:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.125334 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.125395 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.125405 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.125425 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.125439 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:49Z","lastTransitionTime":"2025-11-29T07:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.228710 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.228777 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.228795 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.228823 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.228838 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:49Z","lastTransitionTime":"2025-11-29T07:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.332489 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.332547 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.332560 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.332578 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.332590 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:49Z","lastTransitionTime":"2025-11-29T07:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.411790 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.412057 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:49 crc kubenswrapper[4828]: E1129 07:02:49.412232 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.412453 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.412469 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:49 crc kubenswrapper[4828]: E1129 07:02:49.412578 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:49 crc kubenswrapper[4828]: E1129 07:02:49.412710 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:49 crc kubenswrapper[4828]: E1129 07:02:49.412805 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.434897 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.434933 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.434943 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.434956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.434965 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:49Z","lastTransitionTime":"2025-11-29T07:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.537331 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.537576 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.537596 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.537621 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.537645 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:49Z","lastTransitionTime":"2025-11-29T07:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.641588 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.641637 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.641646 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.641664 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.641674 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:49Z","lastTransitionTime":"2025-11-29T07:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.743716 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.743792 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.743803 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.743831 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.743846 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:49Z","lastTransitionTime":"2025-11-29T07:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.846771 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.846826 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.846834 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.846853 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.846863 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:49Z","lastTransitionTime":"2025-11-29T07:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.949309 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.949349 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.949358 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.949375 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:49 crc kubenswrapper[4828]: I1129 07:02:49.949387 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:49Z","lastTransitionTime":"2025-11-29T07:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.052030 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.052081 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.052095 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.052114 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.052127 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:50Z","lastTransitionTime":"2025-11-29T07:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.154952 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.155208 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.155315 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.155404 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.155471 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:50Z","lastTransitionTime":"2025-11-29T07:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.258384 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.258430 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.258440 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.258454 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.258463 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:50Z","lastTransitionTime":"2025-11-29T07:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.361075 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.361123 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.361137 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.361151 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.361163 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:50Z","lastTransitionTime":"2025-11-29T07:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.464406 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.464447 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.464455 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.464470 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.464480 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:50Z","lastTransitionTime":"2025-11-29T07:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.566734 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.566776 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.566791 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.566807 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.566818 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:50Z","lastTransitionTime":"2025-11-29T07:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.670377 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.670432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.670441 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.670455 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.670464 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:50Z","lastTransitionTime":"2025-11-29T07:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.773073 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.773130 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.773142 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.773159 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.773173 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:50Z","lastTransitionTime":"2025-11-29T07:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.875899 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.875953 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.875965 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.875981 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.875993 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:50Z","lastTransitionTime":"2025-11-29T07:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.978914 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.978973 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.978985 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.979002 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:50 crc kubenswrapper[4828]: I1129 07:02:50.979058 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:50Z","lastTransitionTime":"2025-11-29T07:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.082234 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.082304 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.082314 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.082332 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.082343 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:51Z","lastTransitionTime":"2025-11-29T07:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.189107 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.189168 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.189180 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.189197 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.189208 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:51Z","lastTransitionTime":"2025-11-29T07:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.292321 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.292357 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.292367 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.292381 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.292392 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:51Z","lastTransitionTime":"2025-11-29T07:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.395066 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.395119 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.395128 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.395146 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.395156 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:51Z","lastTransitionTime":"2025-11-29T07:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.411357 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.411476 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.411588 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.411695 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:51 crc kubenswrapper[4828]: E1129 07:02:51.411701 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:51 crc kubenswrapper[4828]: E1129 07:02:51.411790 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:51 crc kubenswrapper[4828]: E1129 07:02:51.411850 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:51 crc kubenswrapper[4828]: E1129 07:02:51.411924 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.447070 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-26zg8" podStartSLOduration=85.446994855 podStartE2EDuration="1m25.446994855s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:02:51.446949344 +0000 UTC m=+111.069025392" watchObservedRunningTime="2025-11-29 07:02:51.446994855 +0000 UTC m=+111.069070913" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.447302 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-qfj9g" podStartSLOduration=86.447294332 podStartE2EDuration="1m26.447294332s" podCreationTimestamp="2025-11-29 07:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:02:51.433079298 +0000 UTC m=+111.055155356" watchObservedRunningTime="2025-11-29 07:02:51.447294332 +0000 UTC m=+111.069370390" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.477079 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=84.477058382 podStartE2EDuration="1m24.477058382s" podCreationTimestamp="2025-11-29 07:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:02:51.475694629 +0000 UTC m=+111.097770687" watchObservedRunningTime="2025-11-29 07:02:51.477058382 +0000 UTC m=+111.099134440" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.489713 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.489697248 podStartE2EDuration="59.489697248s" podCreationTimestamp="2025-11-29 07:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:02:51.489400711 +0000 UTC m=+111.111476769" watchObservedRunningTime="2025-11-29 07:02:51.489697248 +0000 UTC m=+111.111773306" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.497561 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.497613 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.497621 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.497635 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.497663 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:51Z","lastTransitionTime":"2025-11-29T07:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.553927 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=20.553911191 podStartE2EDuration="20.553911191s" podCreationTimestamp="2025-11-29 07:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:02:51.553540882 +0000 UTC m=+111.175616940" watchObservedRunningTime="2025-11-29 07:02:51.553911191 +0000 UTC m=+111.175987249" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.599713 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.599760 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.599772 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.599787 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.599798 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:51Z","lastTransitionTime":"2025-11-29T07:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.625902 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ghlnj" podStartSLOduration=86.625885983 podStartE2EDuration="1m26.625885983s" podCreationTimestamp="2025-11-29 07:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:02:51.625857772 +0000 UTC m=+111.247933830" watchObservedRunningTime="2025-11-29 07:02:51.625885983 +0000 UTC m=+111.247962031" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.667845 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.667827177 podStartE2EDuration="1m22.667827177s" podCreationTimestamp="2025-11-29 07:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:02:51.667804527 +0000 UTC m=+111.289880595" watchObservedRunningTime="2025-11-29 07:02:51.667827177 +0000 UTC m=+111.289903235" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.668047 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=85.668041112 podStartE2EDuration="1m25.668041112s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:02:51.655153731 +0000 UTC m=+111.277229809" watchObservedRunningTime="2025-11-29 07:02:51.668041112 +0000 UTC m=+111.290117170" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.703718 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:51 crc 
kubenswrapper[4828]: I1129 07:02:51.703761 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.703769 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.703783 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.703792 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:51Z","lastTransitionTime":"2025-11-29T07:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.706835 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-p6rzz" podStartSLOduration=86.70681857 podStartE2EDuration="1m26.70681857s" podCreationTimestamp="2025-11-29 07:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:02:51.69026091 +0000 UTC m=+111.312336978" watchObservedRunningTime="2025-11-29 07:02:51.70681857 +0000 UTC m=+111.328894628" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.706943 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cv4sj" podStartSLOduration=85.706940263 podStartE2EDuration="1m25.706940263s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-29 07:02:51.70636618 +0000 UTC m=+111.328442238" watchObservedRunningTime="2025-11-29 07:02:51.706940263 +0000 UTC m=+111.329016311" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.734050 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podStartSLOduration=86.734028939 podStartE2EDuration="1m26.734028939s" podCreationTimestamp="2025-11-29 07:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:02:51.733552857 +0000 UTC m=+111.355628925" watchObservedRunningTime="2025-11-29 07:02:51.734028939 +0000 UTC m=+111.356104997" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.806465 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.806514 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.806525 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.806540 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.806551 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:51Z","lastTransitionTime":"2025-11-29T07:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.908657 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.908721 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.908730 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.908748 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:51 crc kubenswrapper[4828]: I1129 07:02:51.908759 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:51Z","lastTransitionTime":"2025-11-29T07:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.011538 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.011603 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.011616 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.011632 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.011643 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:52Z","lastTransitionTime":"2025-11-29T07:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.114516 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.114597 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.114610 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.114630 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.114645 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:52Z","lastTransitionTime":"2025-11-29T07:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.216936 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.216984 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.216993 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.217008 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.217019 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:52Z","lastTransitionTime":"2025-11-29T07:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.319177 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.319219 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.319228 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.319242 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.319251 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:52Z","lastTransitionTime":"2025-11-29T07:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.422692 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.422753 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.422767 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.422785 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.422798 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:52Z","lastTransitionTime":"2025-11-29T07:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.525464 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.525706 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.525722 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.525740 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.525753 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:52Z","lastTransitionTime":"2025-11-29T07:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.628535 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.628586 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.628598 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.628614 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.628624 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:52Z","lastTransitionTime":"2025-11-29T07:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.731618 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.731654 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.731661 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.731674 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.731683 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:52Z","lastTransitionTime":"2025-11-29T07:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.834521 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.834566 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.834577 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.834593 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.834606 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:52Z","lastTransitionTime":"2025-11-29T07:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.936978 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.937025 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.937034 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.937049 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:52 crc kubenswrapper[4828]: I1129 07:02:52.937059 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:52Z","lastTransitionTime":"2025-11-29T07:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.040578 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.040659 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.040678 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.040703 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.040720 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:53Z","lastTransitionTime":"2025-11-29T07:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.143200 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.143233 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.143246 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.143262 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.143299 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:53Z","lastTransitionTime":"2025-11-29T07:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.245977 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.246012 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.246023 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.246038 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.246047 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:53Z","lastTransitionTime":"2025-11-29T07:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.348619 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.348700 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.348724 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.348753 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.348774 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:53Z","lastTransitionTime":"2025-11-29T07:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.411807 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.411901 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:53 crc kubenswrapper[4828]: E1129 07:02:53.411997 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.411813 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.412078 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:53 crc kubenswrapper[4828]: E1129 07:02:53.412235 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:53 crc kubenswrapper[4828]: E1129 07:02:53.412397 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:53 crc kubenswrapper[4828]: E1129 07:02:53.412508 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.451831 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.451867 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.451877 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.451894 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.451905 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:53Z","lastTransitionTime":"2025-11-29T07:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.554190 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.554254 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.554297 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.554323 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.554342 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:53Z","lastTransitionTime":"2025-11-29T07:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.658348 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.658404 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.658420 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.658437 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.658450 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:53Z","lastTransitionTime":"2025-11-29T07:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.761508 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.761554 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.761562 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.761584 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.761593 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:53Z","lastTransitionTime":"2025-11-29T07:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.864059 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.864114 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.864138 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.864157 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.864168 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:53Z","lastTransitionTime":"2025-11-29T07:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.966517 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.966562 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.966572 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.966589 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:53 crc kubenswrapper[4828]: I1129 07:02:53.966600 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:53Z","lastTransitionTime":"2025-11-29T07:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.068289 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.068345 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.068356 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.068375 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.068387 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:54Z","lastTransitionTime":"2025-11-29T07:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.170657 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.170715 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.170734 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.170753 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.170763 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:54Z","lastTransitionTime":"2025-11-29T07:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.272981 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.273036 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.273048 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.273063 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.273073 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:54Z","lastTransitionTime":"2025-11-29T07:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.375558 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.375607 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.375619 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.375635 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.375648 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:54Z","lastTransitionTime":"2025-11-29T07:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.419950 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.419990 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.419998 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.420011 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.420022 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:02:54Z","lastTransitionTime":"2025-11-29T07:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.474594 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z"] Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.475177 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.481803 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.482511 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.482762 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.485364 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.592451 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dacdb55f-e15f-4cfd-a27c-5842daef58fe-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.592506 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dacdb55f-e15f-4cfd-a27c-5842daef58fe-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.592564 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/dacdb55f-e15f-4cfd-a27c-5842daef58fe-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.592615 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dacdb55f-e15f-4cfd-a27c-5842daef58fe-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.592665 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dacdb55f-e15f-4cfd-a27c-5842daef58fe-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.693490 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dacdb55f-e15f-4cfd-a27c-5842daef58fe-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.693567 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dacdb55f-e15f-4cfd-a27c-5842daef58fe-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.693596 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dacdb55f-e15f-4cfd-a27c-5842daef58fe-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.693640 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dacdb55f-e15f-4cfd-a27c-5842daef58fe-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.693645 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dacdb55f-e15f-4cfd-a27c-5842daef58fe-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.693663 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dacdb55f-e15f-4cfd-a27c-5842daef58fe-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.693703 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/dacdb55f-e15f-4cfd-a27c-5842daef58fe-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.694647 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dacdb55f-e15f-4cfd-a27c-5842daef58fe-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.699760 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dacdb55f-e15f-4cfd-a27c-5842daef58fe-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.714843 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dacdb55f-e15f-4cfd-a27c-5842daef58fe-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bf98z\" (UID: \"dacdb55f-e15f-4cfd-a27c-5842daef58fe\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.792870 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" Nov 29 07:02:54 crc kubenswrapper[4828]: W1129 07:02:54.832165 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddacdb55f_e15f_4cfd_a27c_5842daef58fe.slice/crio-5f6602543524770741556ce3429dd3df596d1feb7e82e20ccf120376d8a06aae WatchSource:0}: Error finding container 5f6602543524770741556ce3429dd3df596d1feb7e82e20ccf120376d8a06aae: Status 404 returned error can't find the container with id 5f6602543524770741556ce3429dd3df596d1feb7e82e20ccf120376d8a06aae Nov 29 07:02:54 crc kubenswrapper[4828]: I1129 07:02:54.965834 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" event={"ID":"dacdb55f-e15f-4cfd-a27c-5842daef58fe","Type":"ContainerStarted","Data":"5f6602543524770741556ce3429dd3df596d1feb7e82e20ccf120376d8a06aae"} Nov 29 07:02:55 crc kubenswrapper[4828]: I1129 07:02:55.411612 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:55 crc kubenswrapper[4828]: I1129 07:02:55.411680 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:55 crc kubenswrapper[4828]: I1129 07:02:55.411706 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:55 crc kubenswrapper[4828]: I1129 07:02:55.411773 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:55 crc kubenswrapper[4828]: E1129 07:02:55.411770 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:55 crc kubenswrapper[4828]: E1129 07:02:55.411925 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:55 crc kubenswrapper[4828]: E1129 07:02:55.412121 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:55 crc kubenswrapper[4828]: E1129 07:02:55.412261 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:56 crc kubenswrapper[4828]: I1129 07:02:56.975191 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" event={"ID":"dacdb55f-e15f-4cfd-a27c-5842daef58fe","Type":"ContainerStarted","Data":"2180226de9af61d693ea99e9da184d820d497bc504b223f2ea1378d022919d63"} Nov 29 07:02:56 crc kubenswrapper[4828]: I1129 07:02:56.993237 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bf98z" podStartSLOduration=90.993206177 podStartE2EDuration="1m30.993206177s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:02:56.992449309 +0000 UTC m=+116.614525367" watchObservedRunningTime="2025-11-29 07:02:56.993206177 +0000 UTC m=+116.615282235" Nov 29 07:02:57 crc kubenswrapper[4828]: I1129 07:02:57.411280 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:57 crc kubenswrapper[4828]: I1129 07:02:57.411392 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:57 crc kubenswrapper[4828]: I1129 07:02:57.411285 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:57 crc kubenswrapper[4828]: I1129 07:02:57.411655 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:57 crc kubenswrapper[4828]: E1129 07:02:57.411980 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:57 crc kubenswrapper[4828]: E1129 07:02:57.412076 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:57 crc kubenswrapper[4828]: E1129 07:02:57.412198 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:57 crc kubenswrapper[4828]: E1129 07:02:57.412317 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:02:59 crc kubenswrapper[4828]: I1129 07:02:59.410950 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:02:59 crc kubenswrapper[4828]: E1129 07:02:59.411161 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:02:59 crc kubenswrapper[4828]: I1129 07:02:59.411314 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:02:59 crc kubenswrapper[4828]: I1129 07:02:59.411420 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:02:59 crc kubenswrapper[4828]: I1129 07:02:59.411443 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:02:59 crc kubenswrapper[4828]: E1129 07:02:59.411573 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:02:59 crc kubenswrapper[4828]: E1129 07:02:59.411640 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:02:59 crc kubenswrapper[4828]: E1129 07:02:59.411746 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:00 crc kubenswrapper[4828]: I1129 07:03:00.412027 4828 scope.go:117] "RemoveContainer" containerID="e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b" Nov 29 07:03:00 crc kubenswrapper[4828]: E1129 07:03:00.412205 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" Nov 29 07:03:01 crc kubenswrapper[4828]: E1129 07:03:01.388101 4828 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 29 07:03:01 crc kubenswrapper[4828]: I1129 07:03:01.411171 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:01 crc kubenswrapper[4828]: I1129 07:03:01.411295 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:01 crc kubenswrapper[4828]: I1129 07:03:01.411258 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:01 crc kubenswrapper[4828]: I1129 07:03:01.412315 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:01 crc kubenswrapper[4828]: E1129 07:03:01.412308 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:01 crc kubenswrapper[4828]: E1129 07:03:01.412493 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:01 crc kubenswrapper[4828]: E1129 07:03:01.412514 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:01 crc kubenswrapper[4828]: E1129 07:03:01.412587 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:02 crc kubenswrapper[4828]: E1129 07:03:02.358162 4828 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:03:03 crc kubenswrapper[4828]: I1129 07:03:03.410866 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:03 crc kubenswrapper[4828]: I1129 07:03:03.410963 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:03 crc kubenswrapper[4828]: I1129 07:03:03.410989 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:03 crc kubenswrapper[4828]: E1129 07:03:03.411039 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:03 crc kubenswrapper[4828]: I1129 07:03:03.411064 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:03 crc kubenswrapper[4828]: E1129 07:03:03.411135 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:03 crc kubenswrapper[4828]: E1129 07:03:03.411264 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:03 crc kubenswrapper[4828]: E1129 07:03:03.411419 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:03 crc kubenswrapper[4828]: I1129 07:03:03.999437 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qfj9g_b3a37050-181c-42b4-acf9-dc458a0f5bcf/kube-multus/1.log" Nov 29 07:03:03 crc kubenswrapper[4828]: I1129 07:03:03.999891 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qfj9g_b3a37050-181c-42b4-acf9-dc458a0f5bcf/kube-multus/0.log" Nov 29 07:03:03 crc kubenswrapper[4828]: I1129 07:03:03.999942 4828 generic.go:334] "Generic (PLEG): container finished" podID="b3a37050-181c-42b4-acf9-dc458a0f5bcf" containerID="81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20" exitCode=1 Nov 29 07:03:04 crc kubenswrapper[4828]: I1129 07:03:03.999989 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qfj9g" event={"ID":"b3a37050-181c-42b4-acf9-dc458a0f5bcf","Type":"ContainerDied","Data":"81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20"} Nov 29 07:03:04 crc kubenswrapper[4828]: I1129 07:03:04.000030 4828 scope.go:117] "RemoveContainer" containerID="77ae8cad7e4b32fca207eff3bd418544dad38da35d110924b69045946787aec8" Nov 29 07:03:04 crc kubenswrapper[4828]: I1129 07:03:04.000488 4828 scope.go:117] "RemoveContainer" containerID="81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20" Nov 29 07:03:04 crc kubenswrapper[4828]: E1129 07:03:04.000703 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-qfj9g_openshift-multus(b3a37050-181c-42b4-acf9-dc458a0f5bcf)\"" pod="openshift-multus/multus-qfj9g" podUID="b3a37050-181c-42b4-acf9-dc458a0f5bcf" Nov 29 07:03:05 crc kubenswrapper[4828]: I1129 07:03:05.005387 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-qfj9g_b3a37050-181c-42b4-acf9-dc458a0f5bcf/kube-multus/1.log" Nov 29 07:03:05 crc kubenswrapper[4828]: I1129 07:03:05.411477 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:05 crc kubenswrapper[4828]: I1129 07:03:05.411500 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:05 crc kubenswrapper[4828]: I1129 07:03:05.411641 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:05 crc kubenswrapper[4828]: E1129 07:03:05.411868 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:05 crc kubenswrapper[4828]: I1129 07:03:05.412021 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:05 crc kubenswrapper[4828]: E1129 07:03:05.412131 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:05 crc kubenswrapper[4828]: E1129 07:03:05.412375 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:05 crc kubenswrapper[4828]: E1129 07:03:05.412530 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:07 crc kubenswrapper[4828]: E1129 07:03:07.359573 4828 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:03:07 crc kubenswrapper[4828]: I1129 07:03:07.411639 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:07 crc kubenswrapper[4828]: I1129 07:03:07.411685 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:07 crc kubenswrapper[4828]: I1129 07:03:07.411694 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:07 crc kubenswrapper[4828]: I1129 07:03:07.411930 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:07 crc kubenswrapper[4828]: E1129 07:03:07.412028 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:07 crc kubenswrapper[4828]: E1129 07:03:07.412133 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:07 crc kubenswrapper[4828]: E1129 07:03:07.412512 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:07 crc kubenswrapper[4828]: E1129 07:03:07.412548 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:09 crc kubenswrapper[4828]: I1129 07:03:09.411805 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:09 crc kubenswrapper[4828]: I1129 07:03:09.411848 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:09 crc kubenswrapper[4828]: I1129 07:03:09.411880 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:09 crc kubenswrapper[4828]: E1129 07:03:09.411965 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:09 crc kubenswrapper[4828]: I1129 07:03:09.412033 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:09 crc kubenswrapper[4828]: E1129 07:03:09.412115 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:09 crc kubenswrapper[4828]: E1129 07:03:09.412314 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:09 crc kubenswrapper[4828]: E1129 07:03:09.412413 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:11 crc kubenswrapper[4828]: I1129 07:03:11.411514 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:11 crc kubenswrapper[4828]: I1129 07:03:11.411562 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:11 crc kubenswrapper[4828]: I1129 07:03:11.411514 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:11 crc kubenswrapper[4828]: I1129 07:03:11.411578 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:11 crc kubenswrapper[4828]: E1129 07:03:11.412633 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:11 crc kubenswrapper[4828]: E1129 07:03:11.412687 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:11 crc kubenswrapper[4828]: E1129 07:03:11.412741 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:11 crc kubenswrapper[4828]: E1129 07:03:11.412783 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:11 crc kubenswrapper[4828]: I1129 07:03:11.413389 4828 scope.go:117] "RemoveContainer" containerID="e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b" Nov 29 07:03:11 crc kubenswrapper[4828]: E1129 07:03:11.413564 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-49f6l_openshift-ovn-kubernetes(c273b031-d4b1-480a-9dd1-e26ed759c8a0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" Nov 29 07:03:12 crc kubenswrapper[4828]: E1129 07:03:12.361281 4828 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:03:13 crc kubenswrapper[4828]: I1129 07:03:13.411501 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:13 crc kubenswrapper[4828]: I1129 07:03:13.411513 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:13 crc kubenswrapper[4828]: E1129 07:03:13.412457 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:13 crc kubenswrapper[4828]: I1129 07:03:13.411633 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:13 crc kubenswrapper[4828]: E1129 07:03:13.412859 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:13 crc kubenswrapper[4828]: I1129 07:03:13.411586 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:13 crc kubenswrapper[4828]: E1129 07:03:13.413105 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:13 crc kubenswrapper[4828]: E1129 07:03:13.412577 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:15 crc kubenswrapper[4828]: I1129 07:03:15.411415 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:15 crc kubenswrapper[4828]: I1129 07:03:15.411541 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:15 crc kubenswrapper[4828]: E1129 07:03:15.411606 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:15 crc kubenswrapper[4828]: I1129 07:03:15.411639 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:15 crc kubenswrapper[4828]: E1129 07:03:15.411768 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:15 crc kubenswrapper[4828]: I1129 07:03:15.411902 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:15 crc kubenswrapper[4828]: E1129 07:03:15.412103 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:15 crc kubenswrapper[4828]: E1129 07:03:15.412351 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:15 crc kubenswrapper[4828]: I1129 07:03:15.412927 4828 scope.go:117] "RemoveContainer" containerID="81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20" Nov 29 07:03:16 crc kubenswrapper[4828]: I1129 07:03:16.041314 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qfj9g_b3a37050-181c-42b4-acf9-dc458a0f5bcf/kube-multus/1.log" Nov 29 07:03:16 crc kubenswrapper[4828]: I1129 07:03:16.041371 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qfj9g" event={"ID":"b3a37050-181c-42b4-acf9-dc458a0f5bcf","Type":"ContainerStarted","Data":"0ce01932a55d625ed624dfad578fd1a946c7ae87a5964106d755917f0c7ab53d"} Nov 29 07:03:17 crc kubenswrapper[4828]: E1129 07:03:17.363124 4828 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:03:17 crc kubenswrapper[4828]: I1129 07:03:17.411732 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:17 crc kubenswrapper[4828]: I1129 07:03:17.411783 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:17 crc kubenswrapper[4828]: E1129 07:03:17.411874 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:17 crc kubenswrapper[4828]: I1129 07:03:17.411732 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:17 crc kubenswrapper[4828]: E1129 07:03:17.412062 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:17 crc kubenswrapper[4828]: E1129 07:03:17.412141 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:17 crc kubenswrapper[4828]: I1129 07:03:17.412305 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:17 crc kubenswrapper[4828]: E1129 07:03:17.412405 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:19 crc kubenswrapper[4828]: I1129 07:03:19.411475 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:19 crc kubenswrapper[4828]: I1129 07:03:19.411578 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:19 crc kubenswrapper[4828]: E1129 07:03:19.411616 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:19 crc kubenswrapper[4828]: I1129 07:03:19.411707 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:19 crc kubenswrapper[4828]: E1129 07:03:19.411836 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:19 crc kubenswrapper[4828]: E1129 07:03:19.411996 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:19 crc kubenswrapper[4828]: I1129 07:03:19.412003 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:19 crc kubenswrapper[4828]: E1129 07:03:19.412354 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:21 crc kubenswrapper[4828]: I1129 07:03:21.411439 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:21 crc kubenswrapper[4828]: I1129 07:03:21.411452 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:21 crc kubenswrapper[4828]: I1129 07:03:21.411574 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:21 crc kubenswrapper[4828]: I1129 07:03:21.412540 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:21 crc kubenswrapper[4828]: E1129 07:03:21.412703 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:21 crc kubenswrapper[4828]: E1129 07:03:21.413301 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:21 crc kubenswrapper[4828]: E1129 07:03:21.413436 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:21 crc kubenswrapper[4828]: E1129 07:03:21.413493 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:22 crc kubenswrapper[4828]: E1129 07:03:22.365016 4828 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:03:23 crc kubenswrapper[4828]: I1129 07:03:23.412421 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:23 crc kubenswrapper[4828]: I1129 07:03:23.412456 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:23 crc kubenswrapper[4828]: I1129 07:03:23.412453 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:23 crc kubenswrapper[4828]: I1129 07:03:23.412596 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:23 crc kubenswrapper[4828]: E1129 07:03:23.412596 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:23 crc kubenswrapper[4828]: E1129 07:03:23.412701 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:23 crc kubenswrapper[4828]: E1129 07:03:23.412803 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:23 crc kubenswrapper[4828]: E1129 07:03:23.412903 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:25 crc kubenswrapper[4828]: I1129 07:03:25.410832 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:25 crc kubenswrapper[4828]: E1129 07:03:25.410997 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:25 crc kubenswrapper[4828]: I1129 07:03:25.411113 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:25 crc kubenswrapper[4828]: I1129 07:03:25.411133 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:25 crc kubenswrapper[4828]: I1129 07:03:25.411774 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:25 crc kubenswrapper[4828]: E1129 07:03:25.412089 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:25 crc kubenswrapper[4828]: E1129 07:03:25.412318 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:25 crc kubenswrapper[4828]: E1129 07:03:25.412357 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:25 crc kubenswrapper[4828]: I1129 07:03:25.412774 4828 scope.go:117] "RemoveContainer" containerID="e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b" Nov 29 07:03:27 crc kubenswrapper[4828]: I1129 07:03:27.089991 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/3.log" Nov 29 07:03:27 crc kubenswrapper[4828]: I1129 07:03:27.092815 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerStarted","Data":"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa"} Nov 29 07:03:27 crc kubenswrapper[4828]: E1129 07:03:27.366583 4828 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:03:27 crc kubenswrapper[4828]: I1129 07:03:27.411284 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:27 crc kubenswrapper[4828]: E1129 07:03:27.411478 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:27 crc kubenswrapper[4828]: I1129 07:03:27.411520 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:27 crc kubenswrapper[4828]: I1129 07:03:27.411598 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:27 crc kubenswrapper[4828]: E1129 07:03:27.411742 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:27 crc kubenswrapper[4828]: E1129 07:03:27.411999 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:27 crc kubenswrapper[4828]: I1129 07:03:27.412003 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:27 crc kubenswrapper[4828]: E1129 07:03:27.412084 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:29 crc kubenswrapper[4828]: I1129 07:03:29.098534 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:03:29 crc kubenswrapper[4828]: I1129 07:03:29.127229 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podStartSLOduration=124.127200435 podStartE2EDuration="2m4.127200435s" podCreationTimestamp="2025-11-29 07:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:29.126551548 +0000 UTC m=+148.748627626" watchObservedRunningTime="2025-11-29 07:03:29.127200435 +0000 UTC m=+148.749276493" Nov 29 07:03:29 crc kubenswrapper[4828]: I1129 07:03:29.411002 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:29 crc kubenswrapper[4828]: E1129 07:03:29.411157 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:29 crc kubenswrapper[4828]: I1129 07:03:29.411230 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:29 crc kubenswrapper[4828]: I1129 07:03:29.411293 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:29 crc kubenswrapper[4828]: I1129 07:03:29.411339 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:29 crc kubenswrapper[4828]: E1129 07:03:29.411390 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:29 crc kubenswrapper[4828]: E1129 07:03:29.411473 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:29 crc kubenswrapper[4828]: E1129 07:03:29.411537 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:29 crc kubenswrapper[4828]: I1129 07:03:29.535584 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4ffn6"] Nov 29 07:03:30 crc kubenswrapper[4828]: I1129 07:03:30.101655 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:30 crc kubenswrapper[4828]: E1129 07:03:30.102520 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4ffn6" podUID="f6581e2a-a98c-493d-8c8f-20c5b4c4b17c" Nov 29 07:03:31 crc kubenswrapper[4828]: I1129 07:03:31.411674 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:03:31 crc kubenswrapper[4828]: I1129 07:03:31.411800 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:31 crc kubenswrapper[4828]: E1129 07:03:31.411827 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:03:31 crc kubenswrapper[4828]: I1129 07:03:31.412021 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:03:31 crc kubenswrapper[4828]: E1129 07:03:31.412096 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:03:31 crc kubenswrapper[4828]: E1129 07:03:31.412325 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:03:32 crc kubenswrapper[4828]: I1129 07:03:32.411632 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:32 crc kubenswrapper[4828]: I1129 07:03:32.414416 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 29 07:03:32 crc kubenswrapper[4828]: I1129 07:03:32.416063 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 29 07:03:33 crc kubenswrapper[4828]: I1129 07:03:33.241798 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:33 crc kubenswrapper[4828]: I1129 07:03:33.242178 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:33 crc kubenswrapper[4828]: I1129 07:03:33.242320 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:03:33 crc kubenswrapper[4828]: E1129 07:03:33.242630 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:03:33 crc kubenswrapper[4828]: 
E1129 07:03:33.242682 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 29 07:03:33 crc kubenswrapper[4828]: E1129 07:03:33.242814 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:05:35.24273262 +0000 UTC m=+274.864808738 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:03:33 crc kubenswrapper[4828]: E1129 07:03:33.242897 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:05:35.242856123 +0000 UTC m=+274.864932331 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 29 07:03:33 crc kubenswrapper[4828]: E1129 07:03:33.242938 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:05:35.242921395 +0000 UTC m=+274.864997703 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 29 07:03:33 crc kubenswrapper[4828]: I1129 07:03:33.343448 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 07:03:33 crc kubenswrapper[4828]: I1129 07:03:33.343492 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 07:03:33 crc kubenswrapper[4828]: E1129 07:03:33.343660 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 29 07:03:33 crc kubenswrapper[4828]: E1129 07:03:33.343679 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 29 07:03:33 crc kubenswrapper[4828]: E1129 07:03:33.343702 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 29 07:03:33 crc kubenswrapper[4828]: E1129 07:03:33.343758 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:05:35.343743683 +0000 UTC m=+274.965819741 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 29 07:03:33 crc kubenswrapper[4828]: E1129 07:03:33.343797 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 29 07:03:33 crc kubenswrapper[4828]: E1129 07:03:33.343844 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 29 07:03:33 crc kubenswrapper[4828]: E1129 07:03:33.343867 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 29 07:03:33 crc kubenswrapper[4828]: E1129 07:03:33.343950 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:05:35.343921568 +0000 UTC m=+274.965997666 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 29 07:03:33 crc kubenswrapper[4828]: I1129 07:03:33.411579 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 07:03:33 crc kubenswrapper[4828]: I1129 07:03:33.412305 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 07:03:33 crc kubenswrapper[4828]: I1129 07:03:33.412482 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 07:03:33 crc kubenswrapper[4828]: I1129 07:03:33.414917 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 29 07:03:33 crc kubenswrapper[4828]: I1129 07:03:33.414981 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 29 07:03:33 crc kubenswrapper[4828]: I1129 07:03:33.415100 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 29 07:03:33 crc kubenswrapper[4828]: I1129 07:03:33.416430 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.300842 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.757561 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.801622 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nz25w"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.803549 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.803866 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.804043 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bdxmg"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.804962 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.805318 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.805608 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xfq6k"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.805789 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bdxmg"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.806110 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.806702 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.815565 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.817181 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.829851 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.830143 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.832153 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.832616 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.832814 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.832959 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.833642 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.833766 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.833917 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.834964 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.835402 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.835436 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.835486 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.835538 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.835577 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.835663 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.835741 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.836059 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.836093 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.836257 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.836498 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.836759 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.836937 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.837043 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.837338 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.837453 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.837618 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.837746 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.838106 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.838366 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.839910 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.840029 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.840238 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.840461 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ss6dh"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.841492 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.842588 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.842779 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.843482 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.845020 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.848218 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.845782 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.848506 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.848732 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.856525 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7njjk"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.857097 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.859991 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.860716 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.861126 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.861187 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.864475 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.865687 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.867303 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.868515 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-svfss"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.869560 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.870355 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.870423 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mvnk2"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.870841 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9vbf7"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.870885 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-svfss"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.871007 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mvnk2"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.871148 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.871635 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9vbf7"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.871635 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-swjkr"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.872623 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.874104 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.875341 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.875846 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bwcm4"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.876500 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bwcm4"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.877451 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.881972 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.884861 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.885870 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.886216 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.886467 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.889304 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.900366 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.900744 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.900941 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.901222 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.901579 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.901861 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.902118 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.902369 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.902555 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.903228 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.903795 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.903992 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.899982 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.907041 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.907318 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.907527 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.907739 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.907955 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.908349 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.908543 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.909250 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h6p6v"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.912983 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.913469 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.915324 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.915628 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.915687 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.915727 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.915787 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.915858 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.915634 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916077 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916100 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916135 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916200 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916229 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916287 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916377 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916422 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916439 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916519 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916525 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916615 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916677 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916719 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916894 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916929 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.916940 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.919886 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.919887 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.929611 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7lwfp"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.930043 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.930547 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.932038 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.932076 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.935337 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.935421 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.935446 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nz25w"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.936324 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.937049 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.938398 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-svfss"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.941103 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.941699 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-rmxsv"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.942204 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rmxsv"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.942522 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.944215 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.944425 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bdxmg"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.950149 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.950198 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7njjk"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.950211 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bwcm4"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.952032 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.952196 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gscbf"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.952801 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gscbf"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.953304 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.953618 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.955599 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s2ds9"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.956598 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s2ds9"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.956826 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.958702 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ss6dh"]
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961166 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c469b\" (UniqueName: \"kubernetes.io/projected/c4a6d09c-fc2c-4c2e-8bb8-241d636981fd-kube-api-access-c469b\") pod \"console-operator-58897d9998-svfss\" (UID: \"c4a6d09c-fc2c-4c2e-8bb8-241d636981fd\") " pod="openshift-console-operator/console-operator-58897d9998-svfss"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961211 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-audit\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg"
Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961249 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/455fc72a-8bd9-44d9-9e09-ba1d9db0fce8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4fnwr\" (UID: \"455fc72a-8bd9-44d9-9e09-ba1d9db0fce8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961287 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t589s\" (UniqueName: \"kubernetes.io/projected/c52a7bb7-0f41-4457-a354-be5d25881767-kube-api-access-t589s\") pod \"downloads-7954f5f757-mvnk2\" (UID: \"c52a7bb7-0f41-4457-a354-be5d25881767\") " pod="openshift-console/downloads-7954f5f757-mvnk2" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961303 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13d1f1ec-a922-4d84-93b3-214bff4187c0-audit-dir\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961328 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec00b335-adab-4b39-a98e-b68fdb402a27-config\") pod \"machine-api-operator-5694c8668f-7njjk\" (UID: \"ec00b335-adab-4b39-a98e-b68fdb402a27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961343 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d464f5b3-e407-4711-9fcf-823eb7ae866d-encryption-config\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961366 
4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4a6d09c-fc2c-4c2e-8bb8-241d636981fd-trusted-ca\") pod \"console-operator-58897d9998-svfss\" (UID: \"c4a6d09c-fc2c-4c2e-8bb8-241d636981fd\") " pod="openshift-console-operator/console-operator-58897d9998-svfss" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961380 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-client-ca\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961403 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961419 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961440 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn9dg\" (UniqueName: \"kubernetes.io/projected/aaaf2648-20f0-4174-abc4-990d8d3fa84a-kube-api-access-vn9dg\") pod 
\"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961456 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c282f664-abb6-4151-83a5-badb4471d931-serving-cert\") pod \"openshift-config-operator-7777fb866f-swjkr\" (UID: \"c282f664-abb6-4151-83a5-badb4471d931\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961473 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-config\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961488 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961507 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rwgkq\" (UID: \"580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq" Nov 29 07:03:34 crc 
kubenswrapper[4828]: I1129 07:03:34.961524 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgphq\" (UniqueName: \"kubernetes.io/projected/da7eb258-005b-481a-bd0c-a96731361368-kube-api-access-qgphq\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gfj\" (UID: \"da7eb258-005b-481a-bd0c-a96731361368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961542 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-etcd-serving-ca\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961575 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b00633d7-0be4-4a78-800b-d5f412366bc6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vjrr4\" (UID: \"b00633d7-0be4-4a78-800b-d5f412366bc6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961591 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaaf2648-20f0-4174-abc4-990d8d3fa84a-service-ca-bundle\") pod \"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961643 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961662 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlk8b\" (UniqueName: \"kubernetes.io/projected/cafa68b0-17e5-4a83-aefd-560d84f521ea-kube-api-access-wlk8b\") pod \"dns-operator-744455d44c-bwcm4\" (UID: \"cafa68b0-17e5-4a83-aefd-560d84f521ea\") " pod="openshift-dns-operator/dns-operator-744455d44c-bwcm4" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961678 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ec00b335-adab-4b39-a98e-b68fdb402a27-images\") pod \"machine-api-operator-5694c8668f-7njjk\" (UID: \"ec00b335-adab-4b39-a98e-b68fdb402a27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961695 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-config\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961709 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spmv2\" (UniqueName: \"kubernetes.io/projected/d464f5b3-e407-4711-9fcf-823eb7ae866d-kube-api-access-spmv2\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961726 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaaf2648-20f0-4174-abc4-990d8d3fa84a-serving-cert\") pod \"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961745 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d1f1ec-a922-4d84-93b3-214bff4187c0-serving-cert\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961765 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d464f5b3-e407-4711-9fcf-823eb7ae866d-etcd-client\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961780 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d464f5b3-e407-4711-9fcf-823eb7ae866d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961795 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b00633d7-0be4-4a78-800b-d5f412366bc6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vjrr4\" (UID: \"b00633d7-0be4-4a78-800b-d5f412366bc6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961819 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a6d09c-fc2c-4c2e-8bb8-241d636981fd-serving-cert\") pod \"console-operator-58897d9998-svfss\" (UID: \"c4a6d09c-fc2c-4c2e-8bb8-241d636981fd\") " pod="openshift-console-operator/console-operator-58897d9998-svfss" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961841 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zdz6\" (UniqueName: \"kubernetes.io/projected/c282f664-abb6-4151-83a5-badb4471d931-kube-api-access-2zdz6\") pod \"openshift-config-operator-7777fb866f-swjkr\" (UID: \"c282f664-abb6-4151-83a5-badb4471d931\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961857 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaaf2648-20f0-4174-abc4-990d8d3fa84a-config\") pod \"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961875 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6b4j\" (UniqueName: \"kubernetes.io/projected/455fc72a-8bd9-44d9-9e09-ba1d9db0fce8-kube-api-access-g6b4j\") pod \"openshift-apiserver-operator-796bbdcf4f-4fnwr\" (UID: 
\"455fc72a-8bd9-44d9-9e09-ba1d9db0fce8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961893 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b00633d7-0be4-4a78-800b-d5f412366bc6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vjrr4\" (UID: \"b00633d7-0be4-4a78-800b-d5f412366bc6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961910 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961926 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dltjh\" (UniqueName: \"kubernetes.io/projected/03f7edb8-ded1-483c-81d1-d75417a3dbdc-kube-api-access-dltjh\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961948 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-audit-policies\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961966 4828 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-trusted-ca-bundle\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.961982 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d464f5b3-e407-4711-9fcf-823eb7ae866d-audit-policies\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962000 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/681c42c0-27a5-4f76-a992-1855f9fa4be1-client-ca\") pod \"route-controller-manager-6576b87f9c-wkf8d\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962023 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/681c42c0-27a5-4f76-a992-1855f9fa4be1-serving-cert\") pod \"route-controller-manager-6576b87f9c-wkf8d\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962044 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c282f664-abb6-4151-83a5-badb4471d931-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-swjkr\" (UID: \"c282f664-abb6-4151-83a5-badb4471d931\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962077 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7vkl\" (UniqueName: \"kubernetes.io/projected/fceb6344-0e91-4a0c-91bc-88e3415d12c5-kube-api-access-l7vkl\") pod \"machine-approver-56656f9798-vt9cs\" (UID: \"fceb6344-0e91-4a0c-91bc-88e3415d12c5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962106 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgpsl\" (UniqueName: \"kubernetes.io/projected/580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8-kube-api-access-lgpsl\") pod \"cluster-samples-operator-665b6dd947-rwgkq\" (UID: \"580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962126 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-service-ca\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962147 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/455fc72a-8bd9-44d9-9e09-ba1d9db0fce8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4fnwr\" (UID: \"455fc72a-8bd9-44d9-9e09-ba1d9db0fce8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr" Nov 29 07:03:34 crc kubenswrapper[4828]: 
I1129 07:03:34.962162 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-oauth-serving-cert\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962179 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkjsf\" (UniqueName: \"kubernetes.io/projected/681c42c0-27a5-4f76-a992-1855f9fa4be1-kube-api-access-fkjsf\") pod \"route-controller-manager-6576b87f9c-wkf8d\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962203 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13d1f1ec-a922-4d84-93b3-214bff4187c0-node-pullsecrets\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962219 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962235 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d464f5b3-e407-4711-9fcf-823eb7ae866d-audit-dir\") pod 
\"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962252 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962283 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-config\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962299 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13d1f1ec-a922-4d84-93b3-214bff4187c0-encryption-config\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962315 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fceb6344-0e91-4a0c-91bc-88e3415d12c5-machine-approver-tls\") pod \"machine-approver-56656f9798-vt9cs\" (UID: \"fceb6344-0e91-4a0c-91bc-88e3415d12c5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962329 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962723 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd977\" (UniqueName: \"kubernetes.io/projected/b00633d7-0be4-4a78-800b-d5f412366bc6-kube-api-access-pd977\") pod \"cluster-image-registry-operator-dc59b4c8b-vjrr4\" (UID: \"b00633d7-0be4-4a78-800b-d5f412366bc6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.962647 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw"] Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.963255 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.968058 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.968062 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mvnk2"] Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.970722 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw"] Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.971257 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.971617 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx"] Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972188 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972197 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13d1f1ec-a922-4d84-93b3-214bff4187c0-etcd-client\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972627 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972690 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pphlr\" (UniqueName: \"kubernetes.io/projected/78cb844a-3bae-4cd2-9fb8-63f20fec1755-kube-api-access-pphlr\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972715 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a6d09c-fc2c-4c2e-8bb8-241d636981fd-config\") pod \"console-operator-58897d9998-svfss\" (UID: \"c4a6d09c-fc2c-4c2e-8bb8-241d636981fd\") " pod="openshift-console-operator/console-operator-58897d9998-svfss" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972752 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-oauth-config\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972774 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da7eb258-005b-481a-bd0c-a96731361368-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gfj\" (UID: \"da7eb258-005b-481a-bd0c-a96731361368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972797 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2969\" (UniqueName: \"kubernetes.io/projected/13bf3905-e3c4-4b60-a233-d459262f9b98-kube-api-access-s2969\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972833 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec00b335-adab-4b39-a98e-b68fdb402a27-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7njjk\" (UID: \"ec00b335-adab-4b39-a98e-b68fdb402a27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972859 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fceb6344-0e91-4a0c-91bc-88e3415d12c5-config\") pod \"machine-approver-56656f9798-vt9cs\" (UID: \"fceb6344-0e91-4a0c-91bc-88e3415d12c5\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972880 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972901 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cafa68b0-17e5-4a83-aefd-560d84f521ea-metrics-tls\") pod \"dns-operator-744455d44c-bwcm4\" (UID: \"cafa68b0-17e5-4a83-aefd-560d84f521ea\") " pod="openshift-dns-operator/dns-operator-744455d44c-bwcm4" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972927 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzbcl\" (UniqueName: \"kubernetes.io/projected/ec00b335-adab-4b39-a98e-b68fdb402a27-kube-api-access-vzbcl\") pod \"machine-api-operator-5694c8668f-7njjk\" (UID: \"ec00b335-adab-4b39-a98e-b68fdb402a27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972949 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.972980 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d464f5b3-e407-4711-9fcf-823eb7ae866d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.973002 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681c42c0-27a5-4f76-a992-1855f9fa4be1-config\") pod \"route-controller-manager-6576b87f9c-wkf8d\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.973030 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7eb258-005b-481a-bd0c-a96731361368-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gfj\" (UID: \"da7eb258-005b-481a-bd0c-a96731361368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.973061 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-image-import-ca\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.973085 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hzc7\" (UniqueName: \"kubernetes.io/projected/13d1f1ec-a922-4d84-93b3-214bff4187c0-kube-api-access-8hzc7\") pod \"apiserver-76f77b778f-bdxmg\" (UID: 
\"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.973106 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fceb6344-0e91-4a0c-91bc-88e3415d12c5-auth-proxy-config\") pod \"machine-approver-56656f9798-vt9cs\" (UID: \"fceb6344-0e91-4a0c-91bc-88e3415d12c5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.973127 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13bf3905-e3c4-4b60-a233-d459262f9b98-serving-cert\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.973149 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.973200 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-serving-cert\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.973226 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d464f5b3-e407-4711-9fcf-823eb7ae866d-serving-cert\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.973251 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaaf2648-20f0-4174-abc4-990d8d3fa84a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.973308 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03f7edb8-ded1-483c-81d1-d75417a3dbdc-audit-dir\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.973323 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.973519 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9"] Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.974246 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.975108 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gscbf"] Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.978678 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h"] Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.979918 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf"] Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.984082 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-95w8h"] Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.984527 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.987486 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.987849 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg"] Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.988029 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-95w8h" Nov 29 07:03:34 crc kubenswrapper[4828]: I1129 07:03:34.999828 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.000475 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hmxx8"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.000826 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5xwt7"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.001128 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-b8m9c"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.002812 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.003775 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.003968 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.004235 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.004288 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5xwt7" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.005806 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-b8m9c" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.006657 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.009760 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xfq6k"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.009873 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.011297 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.016101 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.016170 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ktplp"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.020481 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.020688 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ktplp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.022530 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.024425 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.027158 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.030483 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xpv8b"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.036217 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xpv8b" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.037571 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.039258 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xt2sv"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.039923 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7lwfp"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.040013 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xt2sv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.040549 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-swjkr"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.043049 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.044789 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.046475 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xpv8b"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.050731 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5xwt7"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.050771 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.050782 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.051574 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.052369 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hmxx8"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.053529 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 
07:03:35.054540 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9vbf7"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.055751 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.056975 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s2ds9"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.064662 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.067592 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h6p6v"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.068133 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.069034 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ktplp"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.070422 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xt2sv"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.071819 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.072590 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.073956 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vzbcl\" (UniqueName: \"kubernetes.io/projected/ec00b335-adab-4b39-a98e-b68fdb402a27-kube-api-access-vzbcl\") pod \"machine-api-operator-5694c8668f-7njjk\" (UID: \"ec00b335-adab-4b39-a98e-b68fdb402a27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.074011 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d7569d-1e02-4c21-af59-f692827931a9-trusted-ca\") pod \"ingress-operator-5b745b69d9-hz5qh\" (UID: \"79d7569d-1e02-4c21-af59-f692827931a9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.074041 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eb6b6e45-3101-4755-a294-ad55096f3483-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ktplp\" (UID: \"eb6b6e45-3101-4755-a294-ad55096f3483\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ktplp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.074073 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsf79\" (UniqueName: \"kubernetes.io/projected/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-kube-api-access-rsf79\") pod \"machine-config-server-b8m9c\" (UID: \"45394bd2-1f6a-4f5f-a682-45c6d56fb57b\") " pod="openshift-machine-config-operator/machine-config-server-b8m9c" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.074101 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff651c8d-3ada-4888-990d-6b0edc5595f4-config\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.074126 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d464f5b3-e407-4711-9fcf-823eb7ae866d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.074148 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681c42c0-27a5-4f76-a992-1855f9fa4be1-config\") pod \"route-controller-manager-6576b87f9c-wkf8d\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.074169 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7eb258-005b-481a-bd0c-a96731361368-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gfj\" (UID: \"da7eb258-005b-481a-bd0c-a96731361368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.074190 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hzc7\" (UniqueName: \"kubernetes.io/projected/13d1f1ec-a922-4d84-93b3-214bff4187c0-kube-api-access-8hzc7\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.074211 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fceb6344-0e91-4a0c-91bc-88e3415d12c5-auth-proxy-config\") pod \"machine-approver-56656f9798-vt9cs\" (UID: \"fceb6344-0e91-4a0c-91bc-88e3415d12c5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.074234 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13bf3905-e3c4-4b60-a233-d459262f9b98-serving-cert\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.074256 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.074298 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e-apiservice-cert\") pod \"packageserver-d55dfcdfc-tlzkw\" (UID: \"c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.074584 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-95w8h"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.074632 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x5w66"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.075174 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d464f5b3-e407-4711-9fcf-823eb7ae866d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.075307 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fceb6344-0e91-4a0c-91bc-88e3415d12c5-auth-proxy-config\") pod \"machine-approver-56656f9798-vt9cs\" (UID: \"fceb6344-0e91-4a0c-91bc-88e3415d12c5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.075537 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.075645 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.075789 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaaf2648-20f0-4174-abc4-990d8d3fa84a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.075824 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03f7edb8-ded1-483c-81d1-d75417a3dbdc-audit-dir\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.075842 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.075861 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c469b\" (UniqueName: \"kubernetes.io/projected/c4a6d09c-fc2c-4c2e-8bb8-241d636981fd-kube-api-access-c469b\") pod \"console-operator-58897d9998-svfss\" (UID: \"c4a6d09c-fc2c-4c2e-8bb8-241d636981fd\") " pod="openshift-console-operator/console-operator-58897d9998-svfss" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.075878 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-audit\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.075894 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/118d01c2-66e7-465e-910e-7a53a3516b56-config\") pod \"service-ca-operator-777779d784-95w8h\" (UID: \"118d01c2-66e7-465e-910e-7a53a3516b56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95w8h" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.075924 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455fc72a-8bd9-44d9-9e09-ba1d9db0fce8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4fnwr\" (UID: \"455fc72a-8bd9-44d9-9e09-ba1d9db0fce8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.075942 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03f7edb8-ded1-483c-81d1-d75417a3dbdc-audit-dir\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.075982 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076005 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-client-ca\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076027 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn9dg\" (UniqueName: \"kubernetes.io/projected/aaaf2648-20f0-4174-abc4-990d8d3fa84a-kube-api-access-vn9dg\") pod \"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076050 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-config\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076072 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fbc422bf-1668-470a-96a8-d94bbe3a2209-stats-auth\") pod \"router-default-5444994796-rmxsv\" (UID: \"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076094 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgphq\" (UniqueName: \"kubernetes.io/projected/da7eb258-005b-481a-bd0c-a96731361368-kube-api-access-qgphq\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gfj\" (UID: \"da7eb258-005b-481a-bd0c-a96731361368\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076113 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ff651c8d-3ada-4888-990d-6b0edc5595f4-etcd-ca\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076118 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681c42c0-27a5-4f76-a992-1855f9fa4be1-config\") pod \"route-controller-manager-6576b87f9c-wkf8d\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076130 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b00633d7-0be4-4a78-800b-d5f412366bc6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vjrr4\" (UID: \"b00633d7-0be4-4a78-800b-d5f412366bc6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076156 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaaf2648-20f0-4174-abc4-990d8d3fa84a-service-ca-bundle\") pod \"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076174 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076193 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-config\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076210 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spmv2\" (UniqueName: \"kubernetes.io/projected/d464f5b3-e407-4711-9fcf-823eb7ae866d-kube-api-access-spmv2\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076244 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79d7569d-1e02-4c21-af59-f692827931a9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hz5qh\" (UID: \"79d7569d-1e02-4c21-af59-f692827931a9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076288 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d464f5b3-e407-4711-9fcf-823eb7ae866d-etcd-client\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076310 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/634b47b0-ce44-446c-8f87-531a593c576b-config-volume\") pod \"collect-profiles-29406660-wsbtn\" (UID: \"634b47b0-ce44-446c-8f87-531a593c576b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076336 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b00633d7-0be4-4a78-800b-d5f412366bc6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vjrr4\" (UID: \"b00633d7-0be4-4a78-800b-d5f412366bc6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076353 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dltjh\" (UniqueName: \"kubernetes.io/projected/03f7edb8-ded1-483c-81d1-d75417a3dbdc-kube-api-access-dltjh\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076369 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-audit-policies\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076386 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/681c42c0-27a5-4f76-a992-1855f9fa4be1-client-ca\") pod \"route-controller-manager-6576b87f9c-wkf8d\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076403 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/681c42c0-27a5-4f76-a992-1855f9fa4be1-serving-cert\") pod \"route-controller-manager-6576b87f9c-wkf8d\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076419 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7vkl\" (UniqueName: \"kubernetes.io/projected/fceb6344-0e91-4a0c-91bc-88e3415d12c5-kube-api-access-l7vkl\") pod \"machine-approver-56656f9798-vt9cs\" (UID: \"fceb6344-0e91-4a0c-91bc-88e3415d12c5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076437 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgpsl\" (UniqueName: \"kubernetes.io/projected/580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8-kube-api-access-lgpsl\") pod \"cluster-samples-operator-665b6dd947-rwgkq\" (UID: \"580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076456 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-service-ca\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076471 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/634b47b0-ce44-446c-8f87-531a593c576b-secret-volume\") pod \"collect-profiles-29406660-wsbtn\" (UID: \"634b47b0-ce44-446c-8f87-531a593c576b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076487 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-oauth-serving-cert\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076533 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-audit\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076507 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbc422bf-1668-470a-96a8-d94bbe3a2209-service-ca-bundle\") pod \"router-default-5444994796-rmxsv\" (UID: \"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076585 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076606 4828 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d464f5b3-e407-4711-9fcf-823eb7ae866d-audit-dir\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076626 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13d1f1ec-a922-4d84-93b3-214bff4187c0-encryption-config\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076648 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fceb6344-0e91-4a0c-91bc-88e3415d12c5-machine-approver-tls\") pod \"machine-approver-56656f9798-vt9cs\" (UID: \"fceb6344-0e91-4a0c-91bc-88e3415d12c5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076663 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13d1f1ec-a922-4d84-93b3-214bff4187c0-etcd-client\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076678 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd977\" (UniqueName: \"kubernetes.io/projected/b00633d7-0be4-4a78-800b-d5f412366bc6-kube-api-access-pd977\") pod \"cluster-image-registry-operator-dc59b4c8b-vjrr4\" (UID: \"b00633d7-0be4-4a78-800b-d5f412366bc6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 
07:03:35.076699 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a6d09c-fc2c-4c2e-8bb8-241d636981fd-config\") pod \"console-operator-58897d9998-svfss\" (UID: \"c4a6d09c-fc2c-4c2e-8bb8-241d636981fd\") " pod="openshift-console-operator/console-operator-58897d9998-svfss" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076728 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c6px\" (UniqueName: \"kubernetes.io/projected/fbc422bf-1668-470a-96a8-d94bbe3a2209-kube-api-access-8c6px\") pod \"router-default-5444994796-rmxsv\" (UID: \"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076795 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec00b335-adab-4b39-a98e-b68fdb402a27-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7njjk\" (UID: \"ec00b335-adab-4b39-a98e-b68fdb402a27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076817 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e-tmpfs\") pod \"packageserver-d55dfcdfc-tlzkw\" (UID: \"c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076833 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: 
\"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076852 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-image-import-ca\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076871 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-node-bootstrap-token\") pod \"machine-config-server-b8m9c\" (UID: \"45394bd2-1f6a-4f5f-a682-45c6d56fb57b\") " pod="openshift-machine-config-operator/machine-config-server-b8m9c" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076914 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b2a61f-b080-46c7-a007-6108a359afe7-config\") pod \"kube-apiserver-operator-766d6c64bb-f282h\" (UID: \"d6b2a61f-b080-46c7-a007-6108a359afe7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076931 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff651c8d-3ada-4888-990d-6b0edc5595f4-serving-cert\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076946 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e-webhook-cert\") pod \"packageserver-d55dfcdfc-tlzkw\" (UID: \"c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076961 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6b2a61f-b080-46c7-a007-6108a359afe7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f282h\" (UID: \"d6b2a61f-b080-46c7-a007-6108a359afe7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076993 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkkp7\" (UniqueName: \"kubernetes.io/projected/eb6b6e45-3101-4755-a294-ad55096f3483-kube-api-access-xkkp7\") pod \"multus-admission-controller-857f4d67dd-ktplp\" (UID: \"eb6b6e45-3101-4755-a294-ad55096f3483\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ktplp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077011 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-serving-cert\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077030 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d464f5b3-e407-4711-9fcf-823eb7ae866d-serving-cert\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077055 
4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px9w4\" (UniqueName: \"kubernetes.io/projected/c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e-kube-api-access-px9w4\") pod \"packageserver-d55dfcdfc-tlzkw\" (UID: \"c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077075 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbc422bf-1668-470a-96a8-d94bbe3a2209-metrics-certs\") pod \"router-default-5444994796-rmxsv\" (UID: \"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077082 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-client-ca\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077093 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t589s\" (UniqueName: \"kubernetes.io/projected/c52a7bb7-0f41-4457-a354-be5d25881767-kube-api-access-t589s\") pod \"downloads-7954f5f757-mvnk2\" (UID: \"c52a7bb7-0f41-4457-a354-be5d25881767\") " pod="openshift-console/downloads-7954f5f757-mvnk2" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077149 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13d1f1ec-a922-4d84-93b3-214bff4187c0-audit-dir\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" 
Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077167 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaaf2648-20f0-4174-abc4-990d8d3fa84a-service-ca-bundle\") pod \"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077191 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077684 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec00b335-adab-4b39-a98e-b68fdb402a27-config\") pod \"machine-api-operator-5694c8668f-7njjk\" (UID: \"ec00b335-adab-4b39-a98e-b68fdb402a27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077707 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d464f5b3-e407-4711-9fcf-823eb7ae866d-encryption-config\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077729 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4a6d09c-fc2c-4c2e-8bb8-241d636981fd-trusted-ca\") pod \"console-operator-58897d9998-svfss\" (UID: 
\"c4a6d09c-fc2c-4c2e-8bb8-241d636981fd\") " pod="openshift-console-operator/console-operator-58897d9998-svfss" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077752 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c282f664-abb6-4151-83a5-badb4471d931-serving-cert\") pod \"openshift-config-operator-7777fb866f-swjkr\" (UID: \"c282f664-abb6-4151-83a5-badb4471d931\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077774 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077803 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff651c8d-3ada-4888-990d-6b0edc5595f4-etcd-service-ca\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077824 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e138e355-8c56-47a6-9008-c8679fad48d5-srv-cert\") pod \"catalog-operator-68c6474976-jwfr9\" (UID: \"e138e355-8c56-47a6-9008-c8679fad48d5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077847 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-etcd-serving-ca\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077902 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077544 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaaf2648-20f0-4174-abc4-990d8d3fa84a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077092 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x5w66"] Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.078102 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13d1f1ec-a922-4d84-93b3-214bff4187c0-audit-dir\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.078315 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-image-import-ca\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " 
pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.078420 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-config\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.079027 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b00633d7-0be4-4a78-800b-d5f412366bc6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vjrr4\" (UID: \"b00633d7-0be4-4a78-800b-d5f412366bc6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.076724 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455fc72a-8bd9-44d9-9e09-ba1d9db0fce8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4fnwr\" (UID: \"455fc72a-8bd9-44d9-9e09-ba1d9db0fce8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.079113 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.077624 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-service-ca\") pod \"console-f9d7485db-9vbf7\" (UID: 
\"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.079422 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-oauth-serving-cert\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.079633 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a6d09c-fc2c-4c2e-8bb8-241d636981fd-config\") pod \"console-operator-58897d9998-svfss\" (UID: \"c4a6d09c-fc2c-4c2e-8bb8-241d636981fd\") " pod="openshift-console-operator/console-operator-58897d9998-svfss" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.079696 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d464f5b3-e407-4711-9fcf-823eb7ae866d-audit-dir\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.079716 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/681c42c0-27a5-4f76-a992-1855f9fa4be1-client-ca\") pod \"route-controller-manager-6576b87f9c-wkf8d\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.079942 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec00b335-adab-4b39-a98e-b68fdb402a27-config\") pod \"machine-api-operator-5694c8668f-7njjk\" (UID: 
\"ec00b335-adab-4b39-a98e-b68fdb402a27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080378 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/118d01c2-66e7-465e-910e-7a53a3516b56-serving-cert\") pod \"service-ca-operator-777779d784-95w8h\" (UID: \"118d01c2-66e7-465e-910e-7a53a3516b56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95w8h" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080421 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rwgkq\" (UID: \"580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080428 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-audit-policies\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080447 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlk8b\" (UniqueName: \"kubernetes.io/projected/cafa68b0-17e5-4a83-aefd-560d84f521ea-kube-api-access-wlk8b\") pod \"dns-operator-744455d44c-bwcm4\" (UID: \"cafa68b0-17e5-4a83-aefd-560d84f521ea\") " pod="openshift-dns-operator/dns-operator-744455d44c-bwcm4" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080486 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/13d1f1ec-a922-4d84-93b3-214bff4187c0-serving-cert\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080518 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79d7569d-1e02-4c21-af59-f692827931a9-metrics-tls\") pod \"ingress-operator-5b745b69d9-hz5qh\" (UID: \"79d7569d-1e02-4c21-af59-f692827931a9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080550 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ec00b335-adab-4b39-a98e-b68fdb402a27-images\") pod \"machine-api-operator-5694c8668f-7njjk\" (UID: \"ec00b335-adab-4b39-a98e-b68fdb402a27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080580 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaaf2648-20f0-4174-abc4-990d8d3fa84a-serving-cert\") pod \"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080604 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d464f5b3-e407-4711-9fcf-823eb7ae866d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080621 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b00633d7-0be4-4a78-800b-d5f412366bc6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vjrr4\" (UID: \"b00633d7-0be4-4a78-800b-d5f412366bc6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080666 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ff651c8d-3ada-4888-990d-6b0edc5595f4-etcd-client\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080692 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a6d09c-fc2c-4c2e-8bb8-241d636981fd-serving-cert\") pod \"console-operator-58897d9998-svfss\" (UID: \"c4a6d09c-fc2c-4c2e-8bb8-241d636981fd\") " pod="openshift-console-operator/console-operator-58897d9998-svfss" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080713 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zdz6\" (UniqueName: \"kubernetes.io/projected/c282f664-abb6-4151-83a5-badb4471d931-kube-api-access-2zdz6\") pod \"openshift-config-operator-7777fb866f-swjkr\" (UID: \"c282f664-abb6-4151-83a5-badb4471d931\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080735 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaaf2648-20f0-4174-abc4-990d8d3fa84a-config\") pod \"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080751 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-certs\") pod \"machine-config-server-b8m9c\" (UID: \"45394bd2-1f6a-4f5f-a682-45c6d56fb57b\") " pod="openshift-machine-config-operator/machine-config-server-b8m9c" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080782 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6b4j\" (UniqueName: \"kubernetes.io/projected/455fc72a-8bd9-44d9-9e09-ba1d9db0fce8-kube-api-access-g6b4j\") pod \"openshift-apiserver-operator-796bbdcf4f-4fnwr\" (UID: \"455fc72a-8bd9-44d9-9e09-ba1d9db0fce8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080804 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080821 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztpkt\" (UniqueName: \"kubernetes.io/projected/975b55fc-fe38-4516-bcef-5af821ad487c-kube-api-access-ztpkt\") pod \"package-server-manager-789f6589d5-4njtf\" (UID: \"975b55fc-fe38-4516-bcef-5af821ad487c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080841 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6b2a61f-b080-46c7-a007-6108a359afe7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f282h\" (UID: \"d6b2a61f-b080-46c7-a007-6108a359afe7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080879 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-trusted-ca-bundle\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080902 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d464f5b3-e407-4711-9fcf-823eb7ae866d-audit-policies\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080925 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c282f664-abb6-4151-83a5-badb4471d931-available-featuregates\") pod \"openshift-config-operator-7777fb866f-swjkr\" (UID: \"c282f664-abb6-4151-83a5-badb4471d931\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080950 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e138e355-8c56-47a6-9008-c8679fad48d5-profile-collector-cert\") pod \"catalog-operator-68c6474976-jwfr9\" (UID: \"e138e355-8c56-47a6-9008-c8679fad48d5\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080967 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/455fc72a-8bd9-44d9-9e09-ba1d9db0fce8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4fnwr\" (UID: \"455fc72a-8bd9-44d9-9e09-ba1d9db0fce8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.080992 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkjsf\" (UniqueName: \"kubernetes.io/projected/681c42c0-27a5-4f76-a992-1855f9fa4be1-kube-api-access-fkjsf\") pod \"route-controller-manager-6576b87f9c-wkf8d\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081008 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13d1f1ec-a922-4d84-93b3-214bff4187c0-node-pullsecrets\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081024 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081042 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj4km\" 
(UniqueName: \"kubernetes.io/projected/634b47b0-ce44-446c-8f87-531a593c576b-kube-api-access-jj4km\") pod \"collect-profiles-29406660-wsbtn\" (UID: \"634b47b0-ce44-446c-8f87-531a593c576b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081251 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081250 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-etcd-serving-ca\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081539 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gvv7\" (UniqueName: \"kubernetes.io/projected/e138e355-8c56-47a6-9008-c8679fad48d5-kube-api-access-8gvv7\") pod \"catalog-operator-68c6474976-jwfr9\" (UID: \"e138e355-8c56-47a6-9008-c8679fad48d5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081576 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-config\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081611 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081630 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwkbh\" (UniqueName: \"kubernetes.io/projected/118d01c2-66e7-465e-910e-7a53a3516b56-kube-api-access-pwkbh\") pod \"service-ca-operator-777779d784-95w8h\" (UID: \"118d01c2-66e7-465e-910e-7a53a3516b56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95w8h" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081647 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081666 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx96z\" (UniqueName: \"kubernetes.io/projected/ff651c8d-3ada-4888-990d-6b0edc5595f4-kube-api-access-xx96z\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081686 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pphlr\" (UniqueName: 
\"kubernetes.io/projected/78cb844a-3bae-4cd2-9fb8-63f20fec1755-kube-api-access-pphlr\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081702 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-oauth-config\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081712 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-serving-cert\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081719 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/975b55fc-fe38-4516-bcef-5af821ad487c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4njtf\" (UID: \"975b55fc-fe38-4516-bcef-5af821ad487c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081745 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fbc422bf-1668-470a-96a8-d94bbe3a2209-default-certificate\") pod \"router-default-5444994796-rmxsv\" (UID: \"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081770 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da7eb258-005b-481a-bd0c-a96731361368-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gfj\" (UID: \"da7eb258-005b-481a-bd0c-a96731361368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081792 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2969\" (UniqueName: \"kubernetes.io/projected/13bf3905-e3c4-4b60-a233-d459262f9b98-kube-api-access-s2969\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081819 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phxw2\" (UniqueName: \"kubernetes.io/projected/79d7569d-1e02-4c21-af59-f692827931a9-kube-api-access-phxw2\") pod \"ingress-operator-5b745b69d9-hz5qh\" (UID: \"79d7569d-1e02-4c21-af59-f692827931a9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081841 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fceb6344-0e91-4a0c-91bc-88e3415d12c5-config\") pod \"machine-approver-56656f9798-vt9cs\" (UID: \"fceb6344-0e91-4a0c-91bc-88e3415d12c5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081858 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081875 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cafa68b0-17e5-4a83-aefd-560d84f521ea-metrics-tls\") pod \"dns-operator-744455d44c-bwcm4\" (UID: \"cafa68b0-17e5-4a83-aefd-560d84f521ea\") " pod="openshift-dns-operator/dns-operator-744455d44c-bwcm4" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.081899 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/681c42c0-27a5-4f76-a992-1855f9fa4be1-serving-cert\") pod \"route-controller-manager-6576b87f9c-wkf8d\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.082251 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.082343 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d464f5b3-e407-4711-9fcf-823eb7ae866d-etcd-client\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.082659 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/c4a6d09c-fc2c-4c2e-8bb8-241d636981fd-trusted-ca\") pod \"console-operator-58897d9998-svfss\" (UID: \"c4a6d09c-fc2c-4c2e-8bb8-241d636981fd\") " pod="openshift-console-operator/console-operator-58897d9998-svfss" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.082822 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13bf3905-e3c4-4b60-a233-d459262f9b98-serving-cert\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.083312 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ec00b335-adab-4b39-a98e-b68fdb402a27-images\") pod \"machine-api-operator-5694c8668f-7njjk\" (UID: \"ec00b335-adab-4b39-a98e-b68fdb402a27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.083333 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.083470 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.083816 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7eb258-005b-481a-bd0c-a96731361368-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gfj\" (UID: \"da7eb258-005b-481a-bd0c-a96731361368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.083916 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c282f664-abb6-4151-83a5-badb4471d931-available-featuregates\") pod \"openshift-config-operator-7777fb866f-swjkr\" (UID: \"c282f664-abb6-4151-83a5-badb4471d931\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.084053 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-trusted-ca-bundle\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.084080 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13d1f1ec-a922-4d84-93b3-214bff4187c0-encryption-config\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.084096 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d464f5b3-e407-4711-9fcf-823eb7ae866d-audit-policies\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 crc kubenswrapper[4828]: 
I1129 07:03:35.084245 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13d1f1ec-a922-4d84-93b3-214bff4187c0-node-pullsecrets\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.084412 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaaf2648-20f0-4174-abc4-990d8d3fa84a-config\") pod \"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.084555 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fceb6344-0e91-4a0c-91bc-88e3415d12c5-config\") pod \"machine-approver-56656f9798-vt9cs\" (UID: \"fceb6344-0e91-4a0c-91bc-88e3415d12c5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.084809 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fceb6344-0e91-4a0c-91bc-88e3415d12c5-machine-approver-tls\") pod \"machine-approver-56656f9798-vt9cs\" (UID: \"fceb6344-0e91-4a0c-91bc-88e3415d12c5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.085030 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d464f5b3-e407-4711-9fcf-823eb7ae866d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 
crc kubenswrapper[4828]: I1129 07:03:35.085449 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.085497 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.085848 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-config\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.086079 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d1f1ec-a922-4d84-93b3-214bff4187c0-config\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.086195 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c282f664-abb6-4151-83a5-badb4471d931-serving-cert\") pod \"openshift-config-operator-7777fb866f-swjkr\" (UID: \"c282f664-abb6-4151-83a5-badb4471d931\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.086434 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d1f1ec-a922-4d84-93b3-214bff4187c0-serving-cert\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.086466 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13d1f1ec-a922-4d84-93b3-214bff4187c0-etcd-client\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.086775 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.086824 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.087052 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d464f5b3-e407-4711-9fcf-823eb7ae866d-encryption-config\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.089410 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/ec00b335-adab-4b39-a98e-b68fdb402a27-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7njjk\" (UID: \"ec00b335-adab-4b39-a98e-b68fdb402a27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.089483 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cafa68b0-17e5-4a83-aefd-560d84f521ea-metrics-tls\") pod \"dns-operator-744455d44c-bwcm4\" (UID: \"cafa68b0-17e5-4a83-aefd-560d84f521ea\") " pod="openshift-dns-operator/dns-operator-744455d44c-bwcm4" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.090306 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.090323 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaaf2648-20f0-4174-abc4-990d8d3fa84a-serving-cert\") pod \"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.090375 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc 
kubenswrapper[4828]: I1129 07:03:35.090553 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.090556 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a6d09c-fc2c-4c2e-8bb8-241d636981fd-serving-cert\") pod \"console-operator-58897d9998-svfss\" (UID: \"c4a6d09c-fc2c-4c2e-8bb8-241d636981fd\") " pod="openshift-console-operator/console-operator-58897d9998-svfss" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.090577 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b00633d7-0be4-4a78-800b-d5f412366bc6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vjrr4\" (UID: \"b00633d7-0be4-4a78-800b-d5f412366bc6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.090945 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da7eb258-005b-481a-bd0c-a96731361368-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gfj\" (UID: \"da7eb258-005b-481a-bd0c-a96731361368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.091031 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/455fc72a-8bd9-44d9-9e09-ba1d9db0fce8-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-4fnwr\" (UID: \"455fc72a-8bd9-44d9-9e09-ba1d9db0fce8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.091413 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-oauth-config\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.091511 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d464f5b3-e407-4711-9fcf-823eb7ae866d-serving-cert\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.093349 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rwgkq\" (UID: \"580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.106664 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.126608 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.147773 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 
07:03:35.166696 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.182594 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj4km\" (UniqueName: \"kubernetes.io/projected/634b47b0-ce44-446c-8f87-531a593c576b-kube-api-access-jj4km\") pod \"collect-profiles-29406660-wsbtn\" (UID: \"634b47b0-ce44-446c-8f87-531a593c576b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.182634 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gvv7\" (UniqueName: \"kubernetes.io/projected/e138e355-8c56-47a6-9008-c8679fad48d5-kube-api-access-8gvv7\") pod \"catalog-operator-68c6474976-jwfr9\" (UID: \"e138e355-8c56-47a6-9008-c8679fad48d5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.182671 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwkbh\" (UniqueName: \"kubernetes.io/projected/118d01c2-66e7-465e-910e-7a53a3516b56-kube-api-access-pwkbh\") pod \"service-ca-operator-777779d784-95w8h\" (UID: \"118d01c2-66e7-465e-910e-7a53a3516b56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95w8h" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.182692 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx96z\" (UniqueName: \"kubernetes.io/projected/ff651c8d-3ada-4888-990d-6b0edc5595f4-kube-api-access-xx96z\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.182761 4828 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/975b55fc-fe38-4516-bcef-5af821ad487c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4njtf\" (UID: \"975b55fc-fe38-4516-bcef-5af821ad487c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.182787 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phxw2\" (UniqueName: \"kubernetes.io/projected/79d7569d-1e02-4c21-af59-f692827931a9-kube-api-access-phxw2\") pod \"ingress-operator-5b745b69d9-hz5qh\" (UID: \"79d7569d-1e02-4c21-af59-f692827931a9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.182810 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fbc422bf-1668-470a-96a8-d94bbe3a2209-default-certificate\") pod \"router-default-5444994796-rmxsv\" (UID: \"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.182864 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d7569d-1e02-4c21-af59-f692827931a9-trusted-ca\") pod \"ingress-operator-5b745b69d9-hz5qh\" (UID: \"79d7569d-1e02-4c21-af59-f692827931a9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.182914 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eb6b6e45-3101-4755-a294-ad55096f3483-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ktplp\" (UID: \"eb6b6e45-3101-4755-a294-ad55096f3483\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-ktplp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.182941 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff651c8d-3ada-4888-990d-6b0edc5595f4-config\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.182962 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsf79\" (UniqueName: \"kubernetes.io/projected/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-kube-api-access-rsf79\") pod \"machine-config-server-b8m9c\" (UID: \"45394bd2-1f6a-4f5f-a682-45c6d56fb57b\") " pod="openshift-machine-config-operator/machine-config-server-b8m9c" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183017 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e-apiservice-cert\") pod \"packageserver-d55dfcdfc-tlzkw\" (UID: \"c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183045 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/118d01c2-66e7-465e-910e-7a53a3516b56-config\") pod \"service-ca-operator-777779d784-95w8h\" (UID: \"118d01c2-66e7-465e-910e-7a53a3516b56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95w8h" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183133 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fbc422bf-1668-470a-96a8-d94bbe3a2209-stats-auth\") pod \"router-default-5444994796-rmxsv\" (UID: 
\"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183180 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ff651c8d-3ada-4888-990d-6b0edc5595f4-etcd-ca\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183213 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79d7569d-1e02-4c21-af59-f692827931a9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hz5qh\" (UID: \"79d7569d-1e02-4c21-af59-f692827931a9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183257 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/634b47b0-ce44-446c-8f87-531a593c576b-config-volume\") pod \"collect-profiles-29406660-wsbtn\" (UID: \"634b47b0-ce44-446c-8f87-531a593c576b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183339 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/634b47b0-ce44-446c-8f87-531a593c576b-secret-volume\") pod \"collect-profiles-29406660-wsbtn\" (UID: \"634b47b0-ce44-446c-8f87-531a593c576b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183386 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fbc422bf-1668-470a-96a8-d94bbe3a2209-service-ca-bundle\") pod \"router-default-5444994796-rmxsv\" (UID: \"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183417 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c6px\" (UniqueName: \"kubernetes.io/projected/fbc422bf-1668-470a-96a8-d94bbe3a2209-kube-api-access-8c6px\") pod \"router-default-5444994796-rmxsv\" (UID: \"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183466 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e-tmpfs\") pod \"packageserver-d55dfcdfc-tlzkw\" (UID: \"c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183486 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-node-bootstrap-token\") pod \"machine-config-server-b8m9c\" (UID: \"45394bd2-1f6a-4f5f-a682-45c6d56fb57b\") " pod="openshift-machine-config-operator/machine-config-server-b8m9c" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183501 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b2a61f-b080-46c7-a007-6108a359afe7-config\") pod \"kube-apiserver-operator-766d6c64bb-f282h\" (UID: \"d6b2a61f-b080-46c7-a007-6108a359afe7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183533 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff651c8d-3ada-4888-990d-6b0edc5595f4-serving-cert\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183551 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e-webhook-cert\") pod \"packageserver-d55dfcdfc-tlzkw\" (UID: \"c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183567 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6b2a61f-b080-46c7-a007-6108a359afe7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f282h\" (UID: \"d6b2a61f-b080-46c7-a007-6108a359afe7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183601 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkkp7\" (UniqueName: \"kubernetes.io/projected/eb6b6e45-3101-4755-a294-ad55096f3483-kube-api-access-xkkp7\") pod \"multus-admission-controller-857f4d67dd-ktplp\" (UID: \"eb6b6e45-3101-4755-a294-ad55096f3483\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ktplp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183638 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px9w4\" (UniqueName: \"kubernetes.io/projected/c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e-kube-api-access-px9w4\") pod \"packageserver-d55dfcdfc-tlzkw\" (UID: \"c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183659 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbc422bf-1668-470a-96a8-d94bbe3a2209-metrics-certs\") pod \"router-default-5444994796-rmxsv\" (UID: \"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183706 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e138e355-8c56-47a6-9008-c8679fad48d5-srv-cert\") pod \"catalog-operator-68c6474976-jwfr9\" (UID: \"e138e355-8c56-47a6-9008-c8679fad48d5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183724 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff651c8d-3ada-4888-990d-6b0edc5595f4-etcd-service-ca\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183758 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/118d01c2-66e7-465e-910e-7a53a3516b56-serving-cert\") pod \"service-ca-operator-777779d784-95w8h\" (UID: \"118d01c2-66e7-465e-910e-7a53a3516b56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95w8h" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183782 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79d7569d-1e02-4c21-af59-f692827931a9-metrics-tls\") pod \"ingress-operator-5b745b69d9-hz5qh\" 
(UID: \"79d7569d-1e02-4c21-af59-f692827931a9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183806 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ff651c8d-3ada-4888-990d-6b0edc5595f4-etcd-client\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183844 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-certs\") pod \"machine-config-server-b8m9c\" (UID: \"45394bd2-1f6a-4f5f-a682-45c6d56fb57b\") " pod="openshift-machine-config-operator/machine-config-server-b8m9c" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183865 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6b2a61f-b080-46c7-a007-6108a359afe7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f282h\" (UID: \"d6b2a61f-b080-46c7-a007-6108a359afe7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183882 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztpkt\" (UniqueName: \"kubernetes.io/projected/975b55fc-fe38-4516-bcef-5af821ad487c-kube-api-access-ztpkt\") pod \"package-server-manager-789f6589d5-4njtf\" (UID: \"975b55fc-fe38-4516-bcef-5af821ad487c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.183917 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/e138e355-8c56-47a6-9008-c8679fad48d5-profile-collector-cert\") pod \"catalog-operator-68c6474976-jwfr9\" (UID: \"e138e355-8c56-47a6-9008-c8679fad48d5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.184146 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e-tmpfs\") pod \"packageserver-d55dfcdfc-tlzkw\" (UID: \"c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.184226 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d7569d-1e02-4c21-af59-f692827931a9-trusted-ca\") pod \"ingress-operator-5b745b69d9-hz5qh\" (UID: \"79d7569d-1e02-4c21-af59-f692827931a9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.187333 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.187350 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79d7569d-1e02-4c21-af59-f692827931a9-metrics-tls\") pod \"ingress-operator-5b745b69d9-hz5qh\" (UID: \"79d7569d-1e02-4c21-af59-f692827931a9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.207339 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.226328 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" 
Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.233729 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff651c8d-3ada-4888-990d-6b0edc5595f4-config\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.246459 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.267417 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.274245 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ff651c8d-3ada-4888-990d-6b0edc5595f4-etcd-ca\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.287058 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.296325 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff651c8d-3ada-4888-990d-6b0edc5595f4-serving-cert\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.307089 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.314957 4828 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff651c8d-3ada-4888-990d-6b0edc5595f4-etcd-service-ca\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.327207 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.347123 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.357636 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ff651c8d-3ada-4888-990d-6b0edc5595f4-etcd-client\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.367754 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.387181 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.407088 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.427229 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.437019 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fbc422bf-1668-470a-96a8-d94bbe3a2209-default-certificate\") pod 
\"router-default-5444994796-rmxsv\" (UID: \"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.447212 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.456359 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fbc422bf-1668-470a-96a8-d94bbe3a2209-stats-auth\") pod \"router-default-5444994796-rmxsv\" (UID: \"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.466951 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.476994 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbc422bf-1668-470a-96a8-d94bbe3a2209-metrics-certs\") pod \"router-default-5444994796-rmxsv\" (UID: \"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.487688 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.494340 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbc422bf-1668-470a-96a8-d94bbe3a2209-service-ca-bundle\") pod \"router-default-5444994796-rmxsv\" (UID: \"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.506746 4828 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"kube-root-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.527632 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.547479 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.557350 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6b2a61f-b080-46c7-a007-6108a359afe7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f282h\" (UID: \"d6b2a61f-b080-46c7-a007-6108a359afe7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.567186 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.574936 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b2a61f-b080-46c7-a007-6108a359afe7-config\") pod \"kube-apiserver-operator-766d6c64bb-f282h\" (UID: \"d6b2a61f-b080-46c7-a007-6108a359afe7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.607093 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.627913 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.647563 4828 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.667875 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.686943 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.707151 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.728138 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.746799 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.767529 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.786957 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.807455 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.826520 4828 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.846922 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.867891 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.886327 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.907041 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.926737 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.935984 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e-apiservice-cert\") pod \"packageserver-d55dfcdfc-tlzkw\" (UID: \"c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.936521 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e-webhook-cert\") pod \"packageserver-d55dfcdfc-tlzkw\" (UID: \"c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.947833 4828 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.967103 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.978621 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e138e355-8c56-47a6-9008-c8679fad48d5-profile-collector-cert\") pod \"catalog-operator-68c6474976-jwfr9\" (UID: \"e138e355-8c56-47a6-9008-c8679fad48d5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.985941 4828 request.go:700] Waited for 1.013417423s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0 Nov 29 07:03:35 crc kubenswrapper[4828]: I1129 07:03:35.987488 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.007311 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.013563 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/634b47b0-ce44-446c-8f87-531a593c576b-secret-volume\") pod \"collect-profiles-29406660-wsbtn\" (UID: \"634b47b0-ce44-446c-8f87-531a593c576b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.016676 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e138e355-8c56-47a6-9008-c8679fad48d5-srv-cert\") pod \"catalog-operator-68c6474976-jwfr9\" (UID: \"e138e355-8c56-47a6-9008-c8679fad48d5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.027443 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.036619 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/975b55fc-fe38-4516-bcef-5af821ad487c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4njtf\" (UID: \"975b55fc-fe38-4516-bcef-5af821ad487c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.047771 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.066164 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.074373 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/118d01c2-66e7-465e-910e-7a53a3516b56-config\") pod \"service-ca-operator-777779d784-95w8h\" (UID: \"118d01c2-66e7-465e-910e-7a53a3516b56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95w8h" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.087756 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.106185 4828 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.127901 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.138167 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/118d01c2-66e7-465e-910e-7a53a3516b56-serving-cert\") pod \"service-ca-operator-777779d784-95w8h\" (UID: \"118d01c2-66e7-465e-910e-7a53a3516b56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95w8h" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.148421 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.167842 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.174551 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/634b47b0-ce44-446c-8f87-531a593c576b-config-volume\") pod \"collect-profiles-29406660-wsbtn\" (UID: \"634b47b0-ce44-446c-8f87-531a593c576b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" Nov 29 07:03:36 crc kubenswrapper[4828]: E1129 07:03:36.183370 4828 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Nov 29 07:03:36 crc kubenswrapper[4828]: E1129 07:03:36.183498 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb6b6e45-3101-4755-a294-ad55096f3483-webhook-certs podName:eb6b6e45-3101-4755-a294-ad55096f3483 nodeName:}" failed. 
No retries permitted until 2025-11-29 07:03:36.683457532 +0000 UTC m=+156.305533660 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eb6b6e45-3101-4755-a294-ad55096f3483-webhook-certs") pod "multus-admission-controller-857f4d67dd-ktplp" (UID: "eb6b6e45-3101-4755-a294-ad55096f3483") : failed to sync secret cache: timed out waiting for the condition Nov 29 07:03:36 crc kubenswrapper[4828]: E1129 07:03:36.183719 4828 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Nov 29 07:03:36 crc kubenswrapper[4828]: E1129 07:03:36.183787 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-node-bootstrap-token podName:45394bd2-1f6a-4f5f-a682-45c6d56fb57b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:36.68376952 +0000 UTC m=+156.305845578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-node-bootstrap-token") pod "machine-config-server-b8m9c" (UID: "45394bd2-1f6a-4f5f-a682-45c6d56fb57b") : failed to sync secret cache: timed out waiting for the condition Nov 29 07:03:36 crc kubenswrapper[4828]: E1129 07:03:36.184905 4828 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Nov 29 07:03:36 crc kubenswrapper[4828]: E1129 07:03:36.185307 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-certs podName:45394bd2-1f6a-4f5f-a682-45c6d56fb57b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:36.685246958 +0000 UTC m=+156.307323056 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-certs") pod "machine-config-server-b8m9c" (UID: "45394bd2-1f6a-4f5f-a682-45c6d56fb57b") : failed to sync secret cache: timed out waiting for the condition Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.186086 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.206832 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.227084 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.255886 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.266677 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.286005 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.307688 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.327121 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.347816 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 29 
07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.366821 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.387176 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.407071 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.427256 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.447425 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.467392 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.488132 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.507533 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.534651 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.547032 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.567461 4828 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.607538 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.626739 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.647580 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.667567 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.686373 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.706959 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.707287 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-node-bootstrap-token\") pod \"machine-config-server-b8m9c\" (UID: \"45394bd2-1f6a-4f5f-a682-45c6d56fb57b\") " pod="openshift-machine-config-operator/machine-config-server-b8m9c" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.707398 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-certs\") pod \"machine-config-server-b8m9c\" (UID: \"45394bd2-1f6a-4f5f-a682-45c6d56fb57b\") " 
pod="openshift-machine-config-operator/machine-config-server-b8m9c" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.707509 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eb6b6e45-3101-4755-a294-ad55096f3483-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ktplp\" (UID: \"eb6b6e45-3101-4755-a294-ad55096f3483\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ktplp" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.710740 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-certs\") pod \"machine-config-server-b8m9c\" (UID: \"45394bd2-1f6a-4f5f-a682-45c6d56fb57b\") " pod="openshift-machine-config-operator/machine-config-server-b8m9c" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.710870 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-node-bootstrap-token\") pod \"machine-config-server-b8m9c\" (UID: \"45394bd2-1f6a-4f5f-a682-45c6d56fb57b\") " pod="openshift-machine-config-operator/machine-config-server-b8m9c" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.711527 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eb6b6e45-3101-4755-a294-ad55096f3483-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ktplp\" (UID: \"eb6b6e45-3101-4755-a294-ad55096f3483\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ktplp" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.727476 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.746880 4828 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.784747 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.786153 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.806445 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.851860 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzbcl\" (UniqueName: \"kubernetes.io/projected/ec00b335-adab-4b39-a98e-b68fdb402a27-kube-api-access-vzbcl\") pod \"machine-api-operator-5694c8668f-7njjk\" (UID: \"ec00b335-adab-4b39-a98e-b68fdb402a27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.861956 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hzc7\" (UniqueName: \"kubernetes.io/projected/13d1f1ec-a922-4d84-93b3-214bff4187c0-kube-api-access-8hzc7\") pod \"apiserver-76f77b778f-bdxmg\" (UID: \"13d1f1ec-a922-4d84-93b3-214bff4187c0\") " pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.867352 4828 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.887058 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.906875 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 29 
07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.944550 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c469b\" (UniqueName: \"kubernetes.io/projected/c4a6d09c-fc2c-4c2e-8bb8-241d636981fd-kube-api-access-c469b\") pod \"console-operator-58897d9998-svfss\" (UID: \"c4a6d09c-fc2c-4c2e-8bb8-241d636981fd\") " pod="openshift-console-operator/console-operator-58897d9998-svfss" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.964087 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn9dg\" (UniqueName: \"kubernetes.io/projected/aaaf2648-20f0-4174-abc4-990d8d3fa84a-kube-api-access-vn9dg\") pod \"authentication-operator-69f744f599-ss6dh\" (UID: \"aaaf2648-20f0-4174-abc4-990d8d3fa84a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.968514 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:36 crc kubenswrapper[4828]: I1129 07:03:36.986703 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spmv2\" (UniqueName: \"kubernetes.io/projected/d464f5b3-e407-4711-9fcf-823eb7ae866d-kube-api-access-spmv2\") pod \"apiserver-7bbb656c7d-5qvpk\" (UID: \"d464f5b3-e407-4711-9fcf-823eb7ae866d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.003117 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgphq\" (UniqueName: \"kubernetes.io/projected/da7eb258-005b-481a-bd0c-a96731361368-kube-api-access-qgphq\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gfj\" (UID: \"da7eb258-005b-481a-bd0c-a96731361368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.005745 4828 
request.go:700] Waited for 1.928537603s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/default/token Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.022886 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t589s\" (UniqueName: \"kubernetes.io/projected/c52a7bb7-0f41-4457-a354-be5d25881767-kube-api-access-t589s\") pod \"downloads-7954f5f757-mvnk2\" (UID: \"c52a7bb7-0f41-4457-a354-be5d25881767\") " pod="openshift-console/downloads-7954f5f757-mvnk2" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.043773 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd977\" (UniqueName: \"kubernetes.io/projected/b00633d7-0be4-4a78-800b-d5f412366bc6-kube-api-access-pd977\") pod \"cluster-image-registry-operator-dc59b4c8b-vjrr4\" (UID: \"b00633d7-0be4-4a78-800b-d5f412366bc6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.055874 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.063767 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b00633d7-0be4-4a78-800b-d5f412366bc6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vjrr4\" (UID: \"b00633d7-0be4-4a78-800b-d5f412366bc6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.069757 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.080875 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.085408 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dltjh\" (UniqueName: \"kubernetes.io/projected/03f7edb8-ded1-483c-81d1-d75417a3dbdc-kube-api-access-dltjh\") pod \"oauth-openshift-558db77b4-xfq6k\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.089219 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.104100 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlk8b\" (UniqueName: \"kubernetes.io/projected/cafa68b0-17e5-4a83-aefd-560d84f521ea-kube-api-access-wlk8b\") pod \"dns-operator-744455d44c-bwcm4\" (UID: \"cafa68b0-17e5-4a83-aefd-560d84f521ea\") " pod="openshift-dns-operator/dns-operator-744455d44c-bwcm4" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.125162 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-svfss" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.132482 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7vkl\" (UniqueName: \"kubernetes.io/projected/fceb6344-0e91-4a0c-91bc-88e3415d12c5-kube-api-access-l7vkl\") pod \"machine-approver-56656f9798-vt9cs\" (UID: \"fceb6344-0e91-4a0c-91bc-88e3415d12c5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.143246 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mvnk2" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.144676 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgpsl\" (UniqueName: \"kubernetes.io/projected/580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8-kube-api-access-lgpsl\") pod \"cluster-samples-operator-665b6dd947-rwgkq\" (UID: \"580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.144965 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.180521 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.186389 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zdz6\" (UniqueName: \"kubernetes.io/projected/c282f664-abb6-4151-83a5-badb4471d931-kube-api-access-2zdz6\") pod \"openshift-config-operator-7777fb866f-swjkr\" (UID: \"c282f664-abb6-4151-83a5-badb4471d931\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.209325 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6b4j\" (UniqueName: \"kubernetes.io/projected/455fc72a-8bd9-44d9-9e09-ba1d9db0fce8-kube-api-access-g6b4j\") pod \"openshift-apiserver-operator-796bbdcf4f-4fnwr\" (UID: \"455fc72a-8bd9-44d9-9e09-ba1d9db0fce8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.211021 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pphlr\" (UniqueName: \"kubernetes.io/projected/78cb844a-3bae-4cd2-9fb8-63f20fec1755-kube-api-access-pphlr\") pod \"console-f9d7485db-9vbf7\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.221621 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkjsf\" (UniqueName: \"kubernetes.io/projected/681c42c0-27a5-4f76-a992-1855f9fa4be1-kube-api-access-fkjsf\") pod \"route-controller-manager-6576b87f9c-wkf8d\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.228245 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bwcm4" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.243831 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2969\" (UniqueName: \"kubernetes.io/projected/13bf3905-e3c4-4b60-a233-d459262f9b98-kube-api-access-s2969\") pod \"controller-manager-879f6c89f-nz25w\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.245581 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bdxmg"] Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.250826 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.262841 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gvv7\" (UniqueName: \"kubernetes.io/projected/e138e355-8c56-47a6-9008-c8679fad48d5-kube-api-access-8gvv7\") pod \"catalog-operator-68c6474976-jwfr9\" (UID: \"e138e355-8c56-47a6-9008-c8679fad48d5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.287025 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwkbh\" (UniqueName: \"kubernetes.io/projected/118d01c2-66e7-465e-910e-7a53a3516b56-kube-api-access-pwkbh\") pod \"service-ca-operator-777779d784-95w8h\" (UID: \"118d01c2-66e7-465e-910e-7a53a3516b56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95w8h" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.325352 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phxw2\" (UniqueName: 
\"kubernetes.io/projected/79d7569d-1e02-4c21-af59-f692827931a9-kube-api-access-phxw2\") pod \"ingress-operator-5b745b69d9-hz5qh\" (UID: \"79d7569d-1e02-4c21-af59-f692827931a9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.328409 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx96z\" (UniqueName: \"kubernetes.io/projected/ff651c8d-3ada-4888-990d-6b0edc5595f4-kube-api-access-xx96z\") pod \"etcd-operator-b45778765-7lwfp\" (UID: \"ff651c8d-3ada-4888-990d-6b0edc5595f4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.335579 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.336627 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.351362 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-95w8h" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.357041 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsf79\" (UniqueName: \"kubernetes.io/projected/45394bd2-1f6a-4f5f-a682-45c6d56fb57b-kube-api-access-rsf79\") pod \"machine-config-server-b8m9c\" (UID: \"45394bd2-1f6a-4f5f-a682-45c6d56fb57b\") " pod="openshift-machine-config-operator/machine-config-server-b8m9c" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.362457 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79d7569d-1e02-4c21-af59-f692827931a9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hz5qh\" (UID: \"79d7569d-1e02-4c21-af59-f692827931a9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.390878 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj4km\" (UniqueName: \"kubernetes.io/projected/634b47b0-ce44-446c-8f87-531a593c576b-kube-api-access-jj4km\") pod \"collect-profiles-29406660-wsbtn\" (UID: \"634b47b0-ce44-446c-8f87-531a593c576b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.402865 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.403545 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c6px\" (UniqueName: \"kubernetes.io/projected/fbc422bf-1668-470a-96a8-d94bbe3a2209-kube-api-access-8c6px\") pod \"router-default-5444994796-rmxsv\" (UID: \"fbc422bf-1668-470a-96a8-d94bbe3a2209\") " pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.415284 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.423941 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkkp7\" (UniqueName: \"kubernetes.io/projected/eb6b6e45-3101-4755-a294-ad55096f3483-kube-api-access-xkkp7\") pod \"multus-admission-controller-857f4d67dd-ktplp\" (UID: \"eb6b6e45-3101-4755-a294-ad55096f3483\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ktplp" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.454394 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.455465 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px9w4\" (UniqueName: \"kubernetes.io/projected/c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e-kube-api-access-px9w4\") pod \"packageserver-d55dfcdfc-tlzkw\" (UID: \"c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.463976 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6b2a61f-b080-46c7-a007-6108a359afe7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f282h\" (UID: \"d6b2a61f-b080-46c7-a007-6108a359afe7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.470520 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.484484 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztpkt\" (UniqueName: \"kubernetes.io/projected/975b55fc-fe38-4516-bcef-5af821ad487c-kube-api-access-ztpkt\") pod \"package-server-manager-789f6589d5-4njtf\" (UID: \"975b55fc-fe38-4516-bcef-5af821ad487c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.511600 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ktplp" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.511685 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.511604 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-b8m9c" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.517939 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfnnz\" (UniqueName: \"kubernetes.io/projected/d02f8ae1-0dd2-41de-852e-1bd55a992cf1-kube-api-access-lfnnz\") pod \"service-ca-9c57cc56f-5xwt7\" (UID: \"d02f8ae1-0dd2-41de-852e-1bd55a992cf1\") " pod="openshift-service-ca/service-ca-9c57cc56f-5xwt7" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518022 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa962b58-6ac1-4c82-86e5-d89b29f40391-proxy-tls\") pod \"machine-config-operator-74547568cd-57mhg\" (UID: \"aa962b58-6ac1-4c82-86e5-d89b29f40391\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518078 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnjpp\" (UniqueName: \"kubernetes.io/projected/3ac75381-8d8e-408c-806f-59c59ca888df-kube-api-access-wnjpp\") pod \"migrator-59844c95c7-gscbf\" (UID: \"3ac75381-8d8e-408c-806f-59c59ca888df\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gscbf" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518098 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh9gs\" (UniqueName: \"kubernetes.io/projected/8a310a8f-1e39-4e6f-8c94-e053124e444d-kube-api-access-nh9gs\") pod \"olm-operator-6b444d44fb-xfngx\" (UID: 
\"8a310a8f-1e39-4e6f-8c94-e053124e444d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518123 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d23e223-6e12-45ff-80b3-1e65d6c36960-trusted-ca\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518167 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d02f8ae1-0dd2-41de-852e-1bd55a992cf1-signing-key\") pod \"service-ca-9c57cc56f-5xwt7\" (UID: \"d02f8ae1-0dd2-41de-852e-1bd55a992cf1\") " pod="openshift-service-ca/service-ca-9c57cc56f-5xwt7" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518191 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a310a8f-1e39-4e6f-8c94-e053124e444d-srv-cert\") pod \"olm-operator-6b444d44fb-xfngx\" (UID: \"8a310a8f-1e39-4e6f-8c94-e053124e444d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518225 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a7e6cb9-6c64-425d-92fe-f067a47489ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s2ds9\" (UID: \"9a7e6cb9-6c64-425d-92fe-f067a47489ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s2ds9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518291 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9mdd\" (UniqueName: \"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-kube-api-access-v9mdd\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518376 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d42c676c-5d0d-41e6-a7d9-51ec413d3b45-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fkg5\" (UID: \"d42c676c-5d0d-41e6-a7d9-51ec413d3b45\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518403 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d42c676c-5d0d-41e6-a7d9-51ec413d3b45-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fkg5\" (UID: \"d42c676c-5d0d-41e6-a7d9-51ec413d3b45\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518424 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-registry-tls\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518450 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzt99\" (UniqueName: \"kubernetes.io/projected/5ba8ca1a-d67d-4042-bebb-94891b81644f-kube-api-access-tzt99\") pod 
\"marketplace-operator-79b997595-hmxx8\" (UID: \"5ba8ca1a-d67d-4042-bebb-94891b81644f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518486 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa962b58-6ac1-4c82-86e5-d89b29f40391-auth-proxy-config\") pod \"machine-config-operator-74547568cd-57mhg\" (UID: \"aa962b58-6ac1-4c82-86e5-d89b29f40391\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518508 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c-proxy-tls\") pod \"machine-config-controller-84d6567774-bwhr9\" (UID: \"ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518528 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa962b58-6ac1-4c82-86e5-d89b29f40391-images\") pod \"machine-config-operator-74547568cd-57mhg\" (UID: \"aa962b58-6ac1-4c82-86e5-d89b29f40391\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518551 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmblk\" (UniqueName: \"kubernetes.io/projected/6c3fcb52-17ea-44d8-b364-1ca524a05878-kube-api-access-fmblk\") pod \"kube-storage-version-migrator-operator-b67b599dd-447zw\" (UID: \"6c3fcb52-17ea-44d8-b364-1ca524a05878\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518587 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kbwb\" (UniqueName: \"kubernetes.io/projected/aa962b58-6ac1-4c82-86e5-d89b29f40391-kube-api-access-6kbwb\") pod \"machine-config-operator-74547568cd-57mhg\" (UID: \"aa962b58-6ac1-4c82-86e5-d89b29f40391\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518611 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9d23e223-6e12-45ff-80b3-1e65d6c36960-ca-trust-extracted\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518634 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-bound-sa-token\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518671 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ba8ca1a-d67d-4042-bebb-94891b81644f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hmxx8\" (UID: \"5ba8ca1a-d67d-4042-bebb-94891b81644f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518692 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5ba8ca1a-d67d-4042-bebb-94891b81644f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hmxx8\" (UID: \"5ba8ca1a-d67d-4042-bebb-94891b81644f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518716 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d02f8ae1-0dd2-41de-852e-1bd55a992cf1-signing-cabundle\") pod \"service-ca-9c57cc56f-5xwt7\" (UID: \"d02f8ae1-0dd2-41de-852e-1bd55a992cf1\") " pod="openshift-service-ca/service-ca-9c57cc56f-5xwt7" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518734 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9d23e223-6e12-45ff-80b3-1e65d6c36960-installation-pull-secrets\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518762 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518808 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d42c676c-5d0d-41e6-a7d9-51ec413d3b45-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-7fkg5\" (UID: \"d42c676c-5d0d-41e6-a7d9-51ec413d3b45\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518847 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bwhr9\" (UID: \"ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518870 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a310a8f-1e39-4e6f-8c94-e053124e444d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xfngx\" (UID: \"8a310a8f-1e39-4e6f-8c94-e053124e444d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518896 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d5tq\" (UniqueName: \"kubernetes.io/projected/9a7e6cb9-6c64-425d-92fe-f067a47489ac-kube-api-access-4d5tq\") pod \"control-plane-machine-set-operator-78cbb6b69f-s2ds9\" (UID: \"9a7e6cb9-6c64-425d-92fe-f067a47489ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s2ds9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518919 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3fcb52-17ea-44d8-b364-1ca524a05878-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-447zw\" (UID: \"6c3fcb52-17ea-44d8-b364-1ca524a05878\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518953 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rnl8\" (UniqueName: \"kubernetes.io/projected/ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c-kube-api-access-7rnl8\") pod \"machine-config-controller-84d6567774-bwhr9\" (UID: \"ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.518978 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3fcb52-17ea-44d8-b364-1ca524a05878-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-447zw\" (UID: \"6c3fcb52-17ea-44d8-b364-1ca524a05878\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.519014 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9d23e223-6e12-45ff-80b3-1e65d6c36960-registry-certificates\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: E1129 07:03:37.522189 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:38.022159749 +0000 UTC m=+157.644235807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.534475 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.551633 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.551657 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.558899 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.569950 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.571718 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj"] Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.579142 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk"] Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.619068 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620191 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620427 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ba8ca1a-d67d-4042-bebb-94891b81644f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hmxx8\" (UID: \"5ba8ca1a-d67d-4042-bebb-94891b81644f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620478 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5ba8ca1a-d67d-4042-bebb-94891b81644f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hmxx8\" (UID: \"5ba8ca1a-d67d-4042-bebb-94891b81644f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620513 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d02f8ae1-0dd2-41de-852e-1bd55a992cf1-signing-cabundle\") pod \"service-ca-9c57cc56f-5xwt7\" (UID: \"d02f8ae1-0dd2-41de-852e-1bd55a992cf1\") " pod="openshift-service-ca/service-ca-9c57cc56f-5xwt7" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620541 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-socket-dir\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620559 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9d23e223-6e12-45ff-80b3-1e65d6c36960-installation-pull-secrets\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620579 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80057f69-af41-4b81-adf4-b8851e70294f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d9qj8\" (UID: \"80057f69-af41-4b81-adf4-b8851e70294f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620628 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d42c676c-5d0d-41e6-a7d9-51ec413d3b45-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fkg5\" (UID: \"d42c676c-5d0d-41e6-a7d9-51ec413d3b45\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620660 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bwhr9\" (UID: \"ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620685 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c-config-volume\") pod \"dns-default-xpv8b\" (UID: \"2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c\") " pod="openshift-dns/dns-default-xpv8b" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620713 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a310a8f-1e39-4e6f-8c94-e053124e444d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xfngx\" (UID: \"8a310a8f-1e39-4e6f-8c94-e053124e444d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620745 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d5tq\" (UniqueName: \"kubernetes.io/projected/9a7e6cb9-6c64-425d-92fe-f067a47489ac-kube-api-access-4d5tq\") pod \"control-plane-machine-set-operator-78cbb6b69f-s2ds9\" (UID: \"9a7e6cb9-6c64-425d-92fe-f067a47489ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s2ds9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620780 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3fcb52-17ea-44d8-b364-1ca524a05878-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-447zw\" (UID: \"6c3fcb52-17ea-44d8-b364-1ca524a05878\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620849 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rnl8\" 
(UniqueName: \"kubernetes.io/projected/ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c-kube-api-access-7rnl8\") pod \"machine-config-controller-84d6567774-bwhr9\" (UID: \"ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620873 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3fcb52-17ea-44d8-b364-1ca524a05878-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-447zw\" (UID: \"6c3fcb52-17ea-44d8-b364-1ca524a05878\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620908 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9d23e223-6e12-45ff-80b3-1e65d6c36960-registry-certificates\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620966 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfnnz\" (UniqueName: \"kubernetes.io/projected/d02f8ae1-0dd2-41de-852e-1bd55a992cf1-kube-api-access-lfnnz\") pod \"service-ca-9c57cc56f-5xwt7\" (UID: \"d02f8ae1-0dd2-41de-852e-1bd55a992cf1\") " pod="openshift-service-ca/service-ca-9c57cc56f-5xwt7" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.620988 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c-metrics-tls\") pod \"dns-default-xpv8b\" (UID: \"2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c\") " pod="openshift-dns/dns-default-xpv8b" Nov 29 07:03:37 crc 
kubenswrapper[4828]: I1129 07:03:37.621025 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa962b58-6ac1-4c82-86e5-d89b29f40391-proxy-tls\") pod \"machine-config-operator-74547568cd-57mhg\" (UID: \"aa962b58-6ac1-4c82-86e5-d89b29f40391\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621058 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc7lm\" (UniqueName: \"kubernetes.io/projected/2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c-kube-api-access-dc7lm\") pod \"dns-default-xpv8b\" (UID: \"2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c\") " pod="openshift-dns/dns-default-xpv8b" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621079 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5269b919-32d1-403a-b90d-f63894e9be39-cert\") pod \"ingress-canary-xt2sv\" (UID: \"5269b919-32d1-403a-b90d-f63894e9be39\") " pod="openshift-ingress-canary/ingress-canary-xt2sv" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621100 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnjpp\" (UniqueName: \"kubernetes.io/projected/3ac75381-8d8e-408c-806f-59c59ca888df-kube-api-access-wnjpp\") pod \"migrator-59844c95c7-gscbf\" (UID: \"3ac75381-8d8e-408c-806f-59c59ca888df\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gscbf" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621144 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh9gs\" (UniqueName: \"kubernetes.io/projected/8a310a8f-1e39-4e6f-8c94-e053124e444d-kube-api-access-nh9gs\") pod \"olm-operator-6b444d44fb-xfngx\" (UID: \"8a310a8f-1e39-4e6f-8c94-e053124e444d\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621168 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-registration-dir\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621201 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d23e223-6e12-45ff-80b3-1e65d6c36960-trusted-ca\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621253 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d02f8ae1-0dd2-41de-852e-1bd55a992cf1-signing-key\") pod \"service-ca-9c57cc56f-5xwt7\" (UID: \"d02f8ae1-0dd2-41de-852e-1bd55a992cf1\") " pod="openshift-service-ca/service-ca-9c57cc56f-5xwt7" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621294 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-csi-data-dir\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621316 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a310a8f-1e39-4e6f-8c94-e053124e444d-srv-cert\") pod \"olm-operator-6b444d44fb-xfngx\" (UID: 
\"8a310a8f-1e39-4e6f-8c94-e053124e444d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621374 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a7e6cb9-6c64-425d-92fe-f067a47489ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s2ds9\" (UID: \"9a7e6cb9-6c64-425d-92fe-f067a47489ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s2ds9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621424 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9mdd\" (UniqueName: \"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-kube-api-access-v9mdd\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621471 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80057f69-af41-4b81-adf4-b8851e70294f-config\") pod \"kube-controller-manager-operator-78b949d7b-d9qj8\" (UID: \"80057f69-af41-4b81-adf4-b8851e70294f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621517 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d42c676c-5d0d-41e6-a7d9-51ec413d3b45-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fkg5\" (UID: \"d42c676c-5d0d-41e6-a7d9-51ec413d3b45\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 
07:03:37.621546 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-registry-tls\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621569 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d42c676c-5d0d-41e6-a7d9-51ec413d3b45-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fkg5\" (UID: \"d42c676c-5d0d-41e6-a7d9-51ec413d3b45\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621603 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzt99\" (UniqueName: \"kubernetes.io/projected/5ba8ca1a-d67d-4042-bebb-94891b81644f-kube-api-access-tzt99\") pod \"marketplace-operator-79b997595-hmxx8\" (UID: \"5ba8ca1a-d67d-4042-bebb-94891b81644f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621677 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrc5k\" (UniqueName: \"kubernetes.io/projected/5269b919-32d1-403a-b90d-f63894e9be39-kube-api-access-qrc5k\") pod \"ingress-canary-xt2sv\" (UID: \"5269b919-32d1-403a-b90d-f63894e9be39\") " pod="openshift-ingress-canary/ingress-canary-xt2sv" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621699 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-plugins-dir\") pod \"csi-hostpathplugin-x5w66\" (UID: 
\"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621724 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa962b58-6ac1-4c82-86e5-d89b29f40391-auth-proxy-config\") pod \"machine-config-operator-74547568cd-57mhg\" (UID: \"aa962b58-6ac1-4c82-86e5-d89b29f40391\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621759 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c-proxy-tls\") pod \"machine-config-controller-84d6567774-bwhr9\" (UID: \"ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621780 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa962b58-6ac1-4c82-86e5-d89b29f40391-images\") pod \"machine-config-operator-74547568cd-57mhg\" (UID: \"aa962b58-6ac1-4c82-86e5-d89b29f40391\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.621830 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmblk\" (UniqueName: \"kubernetes.io/projected/6c3fcb52-17ea-44d8-b364-1ca524a05878-kube-api-access-fmblk\") pod \"kube-storage-version-migrator-operator-b67b599dd-447zw\" (UID: \"6c3fcb52-17ea-44d8-b364-1ca524a05878\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.622933 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vsvj\" (UniqueName: \"kubernetes.io/projected/6d8629cd-6b91-47d6-be66-cc036042a6e8-kube-api-access-2vsvj\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.623024 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kbwb\" (UniqueName: \"kubernetes.io/projected/aa962b58-6ac1-4c82-86e5-d89b29f40391-kube-api-access-6kbwb\") pod \"machine-config-operator-74547568cd-57mhg\" (UID: \"aa962b58-6ac1-4c82-86e5-d89b29f40391\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.623099 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9d23e223-6e12-45ff-80b3-1e65d6c36960-ca-trust-extracted\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.623152 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-bound-sa-token\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.623179 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80057f69-af41-4b81-adf4-b8851e70294f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d9qj8\" (UID: \"80057f69-af41-4b81-adf4-b8851e70294f\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.623236 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-mountpoint-dir\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: E1129 07:03:37.626588 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:38.12655611 +0000 UTC m=+157.748632168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.627394 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9d23e223-6e12-45ff-80b3-1e65d6c36960-installation-pull-secrets\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.629401 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/8a310a8f-1e39-4e6f-8c94-e053124e444d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xfngx\" (UID: \"8a310a8f-1e39-4e6f-8c94-e053124e444d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.630538 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d42c676c-5d0d-41e6-a7d9-51ec413d3b45-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fkg5\" (UID: \"d42c676c-5d0d-41e6-a7d9-51ec413d3b45\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.631793 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d02f8ae1-0dd2-41de-852e-1bd55a992cf1-signing-cabundle\") pod \"service-ca-9c57cc56f-5xwt7\" (UID: \"d02f8ae1-0dd2-41de-852e-1bd55a992cf1\") " pod="openshift-service-ca/service-ca-9c57cc56f-5xwt7" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.633247 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9d23e223-6e12-45ff-80b3-1e65d6c36960-ca-trust-extracted\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.633484 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ba8ca1a-d67d-4042-bebb-94891b81644f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hmxx8\" (UID: \"5ba8ca1a-d67d-4042-bebb-94891b81644f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.634450 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3fcb52-17ea-44d8-b364-1ca524a05878-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-447zw\" (UID: \"6c3fcb52-17ea-44d8-b364-1ca524a05878\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.634699 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bwhr9\" (UID: \"ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.635350 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa962b58-6ac1-4c82-86e5-d89b29f40391-images\") pod \"machine-config-operator-74547568cd-57mhg\" (UID: \"aa962b58-6ac1-4c82-86e5-d89b29f40391\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.636339 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d23e223-6e12-45ff-80b3-1e65d6c36960-trusted-ca\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.636504 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3fcb52-17ea-44d8-b364-1ca524a05878-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-447zw\" (UID: \"6c3fcb52-17ea-44d8-b364-1ca524a05878\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.636600 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5ba8ca1a-d67d-4042-bebb-94891b81644f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hmxx8\" (UID: \"5ba8ca1a-d67d-4042-bebb-94891b81644f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.637128 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa962b58-6ac1-4c82-86e5-d89b29f40391-auth-proxy-config\") pod \"machine-config-operator-74547568cd-57mhg\" (UID: \"aa962b58-6ac1-4c82-86e5-d89b29f40391\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.637872 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a7e6cb9-6c64-425d-92fe-f067a47489ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s2ds9\" (UID: \"9a7e6cb9-6c64-425d-92fe-f067a47489ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s2ds9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.638366 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa962b58-6ac1-4c82-86e5-d89b29f40391-proxy-tls\") pod \"machine-config-operator-74547568cd-57mhg\" (UID: \"aa962b58-6ac1-4c82-86e5-d89b29f40391\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.639724 4828 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c-proxy-tls\") pod \"machine-config-controller-84d6567774-bwhr9\" (UID: \"ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.639745 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a310a8f-1e39-4e6f-8c94-e053124e444d-srv-cert\") pod \"olm-operator-6b444d44fb-xfngx\" (UID: \"8a310a8f-1e39-4e6f-8c94-e053124e444d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.640913 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-registry-tls\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.642467 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9d23e223-6e12-45ff-80b3-1e65d6c36960-registry-certificates\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.642792 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.644077 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d02f8ae1-0dd2-41de-852e-1bd55a992cf1-signing-key\") pod \"service-ca-9c57cc56f-5xwt7\" (UID: \"d02f8ae1-0dd2-41de-852e-1bd55a992cf1\") " pod="openshift-service-ca/service-ca-9c57cc56f-5xwt7" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.652763 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d42c676c-5d0d-41e6-a7d9-51ec413d3b45-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fkg5\" (UID: \"d42c676c-5d0d-41e6-a7d9-51ec413d3b45\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.674141 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rnl8\" (UniqueName: \"kubernetes.io/projected/ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c-kube-api-access-7rnl8\") pod \"machine-config-controller-84d6567774-bwhr9\" (UID: \"ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.686037 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnjpp\" (UniqueName: \"kubernetes.io/projected/3ac75381-8d8e-408c-806f-59c59ca888df-kube-api-access-wnjpp\") pod \"migrator-59844c95c7-gscbf\" (UID: \"3ac75381-8d8e-408c-806f-59c59ca888df\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gscbf" Nov 29 07:03:37 crc kubenswrapper[4828]: W1129 07:03:37.686195 4828 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda7eb258_005b_481a_bd0c_a96731361368.slice/crio-f62359ca02d1d400eb441083218b2faafa9be63ab2cf8d75e5f1fbbc610a3372 WatchSource:0}: Error finding container f62359ca02d1d400eb441083218b2faafa9be63ab2cf8d75e5f1fbbc610a3372: Status 404 returned error can't find the container with id f62359ca02d1d400eb441083218b2faafa9be63ab2cf8d75e5f1fbbc610a3372 Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.712352 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d5tq\" (UniqueName: \"kubernetes.io/projected/9a7e6cb9-6c64-425d-92fe-f067a47489ac-kube-api-access-4d5tq\") pod \"control-plane-machine-set-operator-78cbb6b69f-s2ds9\" (UID: \"9a7e6cb9-6c64-425d-92fe-f067a47489ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s2ds9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.731916 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80057f69-af41-4b81-adf4-b8851e70294f-config\") pod \"kube-controller-manager-operator-78b949d7b-d9qj8\" (UID: \"80057f69-af41-4b81-adf4-b8851e70294f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.731974 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrc5k\" (UniqueName: \"kubernetes.io/projected/5269b919-32d1-403a-b90d-f63894e9be39-kube-api-access-qrc5k\") pod \"ingress-canary-xt2sv\" (UID: \"5269b919-32d1-403a-b90d-f63894e9be39\") " pod="openshift-ingress-canary/ingress-canary-xt2sv" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.731997 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-plugins-dir\") pod \"csi-hostpathplugin-x5w66\" 
(UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.732061 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vsvj\" (UniqueName: \"kubernetes.io/projected/6d8629cd-6b91-47d6-be66-cc036042a6e8-kube-api-access-2vsvj\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.732106 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80057f69-af41-4b81-adf4-b8851e70294f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d9qj8\" (UID: \"80057f69-af41-4b81-adf4-b8851e70294f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.732135 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-mountpoint-dir\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.732229 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-socket-dir\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.732287 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80057f69-af41-4b81-adf4-b8851e70294f-serving-cert\") 
pod \"kube-controller-manager-operator-78b949d7b-d9qj8\" (UID: \"80057f69-af41-4b81-adf4-b8851e70294f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.732318 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.732371 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c-config-volume\") pod \"dns-default-xpv8b\" (UID: \"2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c\") " pod="openshift-dns/dns-default-xpv8b" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.732458 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c-metrics-tls\") pod \"dns-default-xpv8b\" (UID: \"2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c\") " pod="openshift-dns/dns-default-xpv8b" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.732504 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc7lm\" (UniqueName: \"kubernetes.io/projected/2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c-kube-api-access-dc7lm\") pod \"dns-default-xpv8b\" (UID: \"2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c\") " pod="openshift-dns/dns-default-xpv8b" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.732539 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/5269b919-32d1-403a-b90d-f63894e9be39-cert\") pod \"ingress-canary-xt2sv\" (UID: \"5269b919-32d1-403a-b90d-f63894e9be39\") " pod="openshift-ingress-canary/ingress-canary-xt2sv" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.732605 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-registration-dir\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.732659 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-csi-data-dir\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.733143 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80057f69-af41-4b81-adf4-b8851e70294f-config\") pod \"kube-controller-manager-operator-78b949d7b-d9qj8\" (UID: \"80057f69-af41-4b81-adf4-b8851e70294f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.733161 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-plugins-dir\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.733465 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-svfss"] Nov 29 
07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.733653 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-socket-dir\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.733790 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c-config-volume\") pod \"dns-default-xpv8b\" (UID: \"2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c\") " pod="openshift-dns/dns-default-xpv8b" Nov 29 07:03:37 crc kubenswrapper[4828]: E1129 07:03:37.734035 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:38.234021381 +0000 UTC m=+157.856097529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.734091 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-mountpoint-dir\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.734688 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-registration-dir\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.735044 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6d8629cd-6b91-47d6-be66-cc036042a6e8-csi-data-dir\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.742626 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80057f69-af41-4b81-adf4-b8851e70294f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d9qj8\" (UID: \"80057f69-af41-4b81-adf4-b8851e70294f\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.756222 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh9gs\" (UniqueName: \"kubernetes.io/projected/8a310a8f-1e39-4e6f-8c94-e053124e444d-kube-api-access-nh9gs\") pod \"olm-operator-6b444d44fb-xfngx\" (UID: \"8a310a8f-1e39-4e6f-8c94-e053124e444d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.769467 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c-metrics-tls\") pod \"dns-default-xpv8b\" (UID: \"2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c\") " pod="openshift-dns/dns-default-xpv8b" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.769913 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5269b919-32d1-403a-b90d-f63894e9be39-cert\") pod \"ingress-canary-xt2sv\" (UID: \"5269b919-32d1-403a-b90d-f63894e9be39\") " pod="openshift-ingress-canary/ingress-canary-xt2sv" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.769957 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kbwb\" (UniqueName: \"kubernetes.io/projected/aa962b58-6ac1-4c82-86e5-d89b29f40391-kube-api-access-6kbwb\") pod \"machine-config-operator-74547568cd-57mhg\" (UID: \"aa962b58-6ac1-4c82-86e5-d89b29f40391\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.779606 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfnnz\" (UniqueName: \"kubernetes.io/projected/d02f8ae1-0dd2-41de-852e-1bd55a992cf1-kube-api-access-lfnnz\") pod \"service-ca-9c57cc56f-5xwt7\" (UID: 
\"d02f8ae1-0dd2-41de-852e-1bd55a992cf1\") " pod="openshift-service-ca/service-ca-9c57cc56f-5xwt7" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.792215 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ss6dh"] Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.802430 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d42c676c-5d0d-41e6-a7d9-51ec413d3b45-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fkg5\" (UID: \"d42c676c-5d0d-41e6-a7d9-51ec413d3b45\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.808312 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7njjk"] Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.810277 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5xwt7" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.810436 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-bound-sa-token\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.810579 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.810727 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.833639 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:37 crc kubenswrapper[4828]: E1129 07:03:37.833866 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:38.333828573 +0000 UTC m=+157.955904631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.834068 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: E1129 07:03:37.834464 4828 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:38.334448519 +0000 UTC m=+157.956524657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.843331 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9mdd\" (UniqueName: \"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-kube-api-access-v9mdd\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.862763 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzt99\" (UniqueName: \"kubernetes.io/projected/5ba8ca1a-d67d-4042-bebb-94891b81644f-kube-api-access-tzt99\") pod \"marketplace-operator-79b997595-hmxx8\" (UID: \"5ba8ca1a-d67d-4042-bebb-94891b81644f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.884926 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gscbf" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.888103 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.895596 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmblk\" (UniqueName: \"kubernetes.io/projected/6c3fcb52-17ea-44d8-b364-1ca524a05878-kube-api-access-fmblk\") pod \"kube-storage-version-migrator-operator-b67b599dd-447zw\" (UID: \"6c3fcb52-17ea-44d8-b364-1ca524a05878\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.902375 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s2ds9" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.912565 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.918173 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrc5k\" (UniqueName: \"kubernetes.io/projected/5269b919-32d1-403a-b90d-f63894e9be39-kube-api-access-qrc5k\") pod \"ingress-canary-xt2sv\" (UID: \"5269b919-32d1-403a-b90d-f63894e9be39\") " pod="openshift-ingress-canary/ingress-canary-xt2sv" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.929015 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.935659 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:37 crc kubenswrapper[4828]: E1129 07:03:37.936232 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:38.436211892 +0000 UTC m=+158.058287960 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.936904 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vsvj\" (UniqueName: \"kubernetes.io/projected/6d8629cd-6b91-47d6-be66-cc036042a6e8-kube-api-access-2vsvj\") pod \"csi-hostpathplugin-x5w66\" (UID: \"6d8629cd-6b91-47d6-be66-cc036042a6e8\") " pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.952810 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80057f69-af41-4b81-adf4-b8851e70294f-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-d9qj8\" (UID: \"80057f69-af41-4b81-adf4-b8851e70294f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.958155 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.975609 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc7lm\" (UniqueName: \"kubernetes.io/projected/2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c-kube-api-access-dc7lm\") pod \"dns-default-xpv8b\" (UID: \"2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c\") " pod="openshift-dns/dns-default-xpv8b" Nov 29 07:03:37 crc kubenswrapper[4828]: I1129 07:03:37.992343 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mvnk2"] Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.005172 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bwcm4"] Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.038303 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.038775 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:38.538758436 +0000 UTC m=+158.160834494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.041646 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d"] Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.042990 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4"] Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.111474 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x5w66" Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.112631 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xt2sv" Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.113033 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xpv8b" Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.113312 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8" Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.139472 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.139668 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:38.639640266 +0000 UTC m=+158.261716324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.140355 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.140871 4828 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:38.640847777 +0000 UTC m=+158.262923895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.162481 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" event={"ID":"aaaf2648-20f0-4174-abc4-990d8d3fa84a","Type":"ContainerStarted","Data":"5564464cb9f916e85f9d2a2587870ac69e3c038942f3ce9e394e0f834ae9245d"} Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.169434 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" event={"ID":"fceb6344-0e91-4a0c-91bc-88e3415d12c5","Type":"ContainerStarted","Data":"47118d0e23ac2b481e6a37319b3db416c05c3b63dbe34ea3dc2c1ae5b843c558"} Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.169519 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" event={"ID":"fceb6344-0e91-4a0c-91bc-88e3415d12c5","Type":"ContainerStarted","Data":"8422f2f77b86398b43ed9521649d4f36cf5efaf51c898650c9a55427057daef9"} Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.171777 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-svfss" 
event={"ID":"c4a6d09c-fc2c-4c2e-8bb8-241d636981fd","Type":"ContainerStarted","Data":"5471b54dfcdf91b9be8abd6d4a7f7a98ddddfa7e402aeb8bdb4695047a5e49cb"} Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.180411 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj" event={"ID":"da7eb258-005b-481a-bd0c-a96731361368","Type":"ContainerStarted","Data":"ed230eaad42e083bd85c36588d11ac0dcd61eeceaedb6f074f74091c1b7c20b7"} Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.180458 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj" event={"ID":"da7eb258-005b-481a-bd0c-a96731361368","Type":"ContainerStarted","Data":"f62359ca02d1d400eb441083218b2faafa9be63ab2cf8d75e5f1fbbc610a3372"} Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.182648 4828 generic.go:334] "Generic (PLEG): container finished" podID="13d1f1ec-a922-4d84-93b3-214bff4187c0" containerID="0fae65fd8fcbd922636b955c1474920bf493cc2b1fd3ab3a6033d6d762cbb5a5" exitCode=0 Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.182748 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" event={"ID":"13d1f1ec-a922-4d84-93b3-214bff4187c0","Type":"ContainerDied","Data":"0fae65fd8fcbd922636b955c1474920bf493cc2b1fd3ab3a6033d6d762cbb5a5"} Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.182783 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" event={"ID":"13d1f1ec-a922-4d84-93b3-214bff4187c0","Type":"ContainerStarted","Data":"15c9a8aede1566e6bf0b954650eb0e9ea7d0a781a349ec007f6115214614a39d"} Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.189145 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" 
event={"ID":"d464f5b3-e407-4711-9fcf-823eb7ae866d","Type":"ContainerStarted","Data":"bf3cbd32c6599867ec83e8b870282e7a3eeb1389ad99ef6bfd1bd07305e5e140"} Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.195451 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-b8m9c" event={"ID":"45394bd2-1f6a-4f5f-a682-45c6d56fb57b","Type":"ContainerStarted","Data":"cb726958a2e6af03d0c2d8266b718d7d49cfcc3e32f0b9a8a5cd96eeb5883937"} Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.195506 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-b8m9c" event={"ID":"45394bd2-1f6a-4f5f-a682-45c6d56fb57b","Type":"ContainerStarted","Data":"1ad8303868ff90a3f6e46344504048507218a2f82631b62926040c8a8f0dc183"} Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.200148 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" event={"ID":"ec00b335-adab-4b39-a98e-b68fdb402a27","Type":"ContainerStarted","Data":"d0cc2d9d3326de5118bbabacf993f04fe705cfe7109a1912efe86a29da708d50"} Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.201051 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rmxsv" event={"ID":"fbc422bf-1668-470a-96a8-d94bbe3a2209","Type":"ContainerStarted","Data":"e651d88eac3027fb564cafdf0625d830008c23179f7407f8de11bf166bfc4f45"} Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.247749 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.248759 4828 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:38.748469491 +0000 UTC m=+158.370545559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.352498 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.354643 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:38.854628958 +0000 UTC m=+158.476705016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.453927 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.454168 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:38.954149133 +0000 UTC m=+158.576225191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.454494 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.454879 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:38.954867672 +0000 UTC m=+158.576943730 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.539578 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xfq6k"] Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.556538 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.556678 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:39.056658626 +0000 UTC m=+158.678734684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.556927 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.557293 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:39.057281702 +0000 UTC m=+158.679357760 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.596490 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gfj" podStartSLOduration=132.596449061 podStartE2EDuration="2m12.596449061s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:38.595597749 +0000 UTC m=+158.217673807" watchObservedRunningTime="2025-11-29 07:03:38.596449061 +0000 UTC m=+158.218525119" Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.658202 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.658401 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:39.158370488 +0000 UTC m=+158.780446546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.658761 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.659938 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:39.159926018 +0000 UTC m=+158.782002076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.759061 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.759571 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:39.259549306 +0000 UTC m=+158.881625364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.760174 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-29 07:03:39.260165152 +0000 UTC m=+158.882241210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.759774 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.864795 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.865319 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:39.365298832 +0000 UTC m=+158.987374890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: E1129 07:03:38.966683 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:39.466662095 +0000 UTC m=+159.088738153 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:38 crc kubenswrapper[4828]: I1129 07:03:38.966172 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.068777 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:39 crc kubenswrapper[4828]: E1129 07:03:39.069326 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:39.56930492 +0000 UTC m=+159.191380988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.069642 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:39 crc kubenswrapper[4828]: E1129 07:03:39.070141 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:39.570130402 +0000 UTC m=+159.192206460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.173305 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:39 crc kubenswrapper[4828]: E1129 07:03:39.173726 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:39.673704032 +0000 UTC m=+159.295780090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.241343 4828 generic.go:334] "Generic (PLEG): container finished" podID="d464f5b3-e407-4711-9fcf-823eb7ae866d" containerID="2cc386dc8dd13f5e8760195197736c1227e82761e672457228b4f05ffea3b7eb" exitCode=0 Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.241857 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" event={"ID":"d464f5b3-e407-4711-9fcf-823eb7ae866d","Type":"ContainerDied","Data":"2cc386dc8dd13f5e8760195197736c1227e82761e672457228b4f05ffea3b7eb"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.258225 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" event={"ID":"aaaf2648-20f0-4174-abc4-990d8d3fa84a","Type":"ContainerStarted","Data":"ddc3ad390c3a84851d666be864e001a9baf0fd27b45b5fd2760753abe848e107"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.269754 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" event={"ID":"ec00b335-adab-4b39-a98e-b68fdb402a27","Type":"ContainerStarted","Data":"9b6a3271f83622057f29eb06ff644067cdc83ed9e5c91f0b3dc7fb7d42733154"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.273455 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" 
event={"ID":"03f7edb8-ded1-483c-81d1-d75417a3dbdc","Type":"ContainerStarted","Data":"ff99c3ae1bb4ca773018b0ad5272e03bab4e0ad94227af2c28a272a2bea3bdd9"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.275117 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:39 crc kubenswrapper[4828]: E1129 07:03:39.276795 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:39.776782999 +0000 UTC m=+159.398859057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.284252 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-b8m9c" podStartSLOduration=5.284222851 podStartE2EDuration="5.284222851s" podCreationTimestamp="2025-11-29 07:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:39.278076542 +0000 UTC m=+158.900152610" watchObservedRunningTime="2025-11-29 07:03:39.284222851 +0000 UTC 
m=+158.906298909" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.288938 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" event={"ID":"fceb6344-0e91-4a0c-91bc-88e3415d12c5","Type":"ContainerStarted","Data":"075f0190fef482553a86f414bd10028e89a610ab201ee49031978201f5deab46"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.292116 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" event={"ID":"681c42c0-27a5-4f76-a992-1855f9fa4be1","Type":"ContainerStarted","Data":"b0f2fb7d5f1398054de0ae73259346d8c45ae8e2d4d6a9f487666f73e4f40354"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.292173 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" event={"ID":"681c42c0-27a5-4f76-a992-1855f9fa4be1","Type":"ContainerStarted","Data":"c84a4333c923b60ee9127c6e08d4a3b410252a3721dd3046d4a033603bad7e26"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.293144 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.307737 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" event={"ID":"b00633d7-0be4-4a78-800b-d5f412366bc6","Type":"ContainerStarted","Data":"f9fea6c6ad934ef82725f73fdb744245f5d395623c5835f604cb32fd88680190"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.307782 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" event={"ID":"b00633d7-0be4-4a78-800b-d5f412366bc6","Type":"ContainerStarted","Data":"c180ca21c72108ff5dd469ca5374cbae8cf6c501399c078aadcfc8a6e024902d"} Nov 29 07:03:39 
crc kubenswrapper[4828]: I1129 07:03:39.317856 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mvnk2" event={"ID":"c52a7bb7-0f41-4457-a354-be5d25881767","Type":"ContainerStarted","Data":"2b873cb6c331510d5f22e002c004b2358aad7380b44f05bdf3b5f970f452c4e4"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.317922 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mvnk2" event={"ID":"c52a7bb7-0f41-4457-a354-be5d25881767","Type":"ContainerStarted","Data":"5d1617306131332bbd74fd31861b5bb5bd0ebdf0731f1ee09614d9a7eef751bc"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.319017 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mvnk2" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.342951 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.345787 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rmxsv" event={"ID":"fbc422bf-1668-470a-96a8-d94bbe3a2209","Type":"ContainerStarted","Data":"85dc2703cb92ed9f5abbe7be44881a4165f1aed926cf426f9ff9b831eece81dd"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.365224 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.369060 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" event={"ID":"13d1f1ec-a922-4d84-93b3-214bff4187c0","Type":"ContainerStarted","Data":"569c0bb9a41343c4e5d005cf3d56ada8fb076be589ba863ce4f78c5aad382682"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.380014 4828 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvnk2 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.380107 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvnk2" podUID="c52a7bb7-0f41-4457-a354-be5d25881767" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.380997 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:39 crc kubenswrapper[4828]: E1129 07:03:39.388887 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:39.888853398 +0000 UTC m=+159.510929456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:39 crc kubenswrapper[4828]: W1129 07:03:39.415920 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod455fc72a_8bd9_44d9_9e09_ba1d9db0fce8.slice/crio-d4c8fd12f1baae53e67bc81e753c5fce96c867dca49675408a2d55b2a3021c05 WatchSource:0}: Error finding container d4c8fd12f1baae53e67bc81e753c5fce96c867dca49675408a2d55b2a3021c05: Status 404 returned error can't find the container with id d4c8fd12f1baae53e67bc81e753c5fce96c867dca49675408a2d55b2a3021c05 Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.490159 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:39 crc kubenswrapper[4828]: E1129 07:03:39.490644 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:39.990627241 +0000 UTC m=+159.612703299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.502751 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bwcm4" event={"ID":"cafa68b0-17e5-4a83-aefd-560d84f521ea","Type":"ContainerStarted","Data":"eb132aa48ff450b4b3d2518145264cee1f86b0cda287b6c17655939f27d34e48"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.502785 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bwcm4" event={"ID":"cafa68b0-17e5-4a83-aefd-560d84f521ea","Type":"ContainerStarted","Data":"8a15c3a9e04ee240936204f6804fb0a266d4ee750eff3e42ce85256b1d00ce4c"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.502798 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9vbf7"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.502814 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.502825 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.502839 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ktplp"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.502850 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-nz25w"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.512638 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-svfss" event={"ID":"c4a6d09c-fc2c-4c2e-8bb8-241d636981fd","Type":"ContainerStarted","Data":"49c078af203f3e559b2ddc6fc178a5f9ffd74ef84dc94b5804c83ab051789fdd"} Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.512727 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-svfss" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.530587 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-swjkr"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.533525 4828 patch_prober.go:28] interesting pod/console-operator-58897d9998-svfss container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.533571 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-svfss" podUID="c4a6d09c-fc2c-4c2e-8bb8-241d636981fd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 29 07:03:39 crc kubenswrapper[4828]: W1129 07:03:39.538008 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod634b47b0_ce44_446c_8f87_531a593c576b.slice/crio-259b9d08a69ad9f7843607626a4c8982390d7d0e96f8e110aa4be07531637157 WatchSource:0}: Error finding container 259b9d08a69ad9f7843607626a4c8982390d7d0e96f8e110aa4be07531637157: Status 404 returned error can't 
find the container with id 259b9d08a69ad9f7843607626a4c8982390d7d0e96f8e110aa4be07531637157 Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.554400 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.556834 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mvnk2" podStartSLOduration=133.556817127 podStartE2EDuration="2m13.556817127s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:39.552929777 +0000 UTC m=+159.175005855" watchObservedRunningTime="2025-11-29 07:03:39.556817127 +0000 UTC m=+159.178893185" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.557611 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-95w8h"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.564056 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.569137 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.587183 4828 patch_prober.go:28] interesting pod/router-default-5444994796-rmxsv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:03:39 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Nov 29 07:03:39 crc kubenswrapper[4828]: [+]process-running ok Nov 29 07:03:39 crc kubenswrapper[4828]: healthz check failed Nov 29 07:03:39 crc 
kubenswrapper[4828]: I1129 07:03:39.587299 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmxsv" podUID="fbc422bf-1668-470a-96a8-d94bbe3a2209" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.591439 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:39 crc kubenswrapper[4828]: E1129 07:03:39.593100 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:40.093075692 +0000 UTC m=+159.715151750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.601491 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.622457 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-rmxsv" podStartSLOduration=133.622434029 podStartE2EDuration="2m13.622434029s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:39.606893528 +0000 UTC m=+159.228969586" watchObservedRunningTime="2025-11-29 07:03:39.622434029 +0000 UTC m=+159.244510087" Nov 29 07:03:39 crc kubenswrapper[4828]: W1129 07:03:39.628480 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode138e355_8c56_47a6_9008_c8679fad48d5.slice/crio-ba2ac20676da417c4f79e5bf763ab8c0ece4a468300473372249e222de480600 WatchSource:0}: Error finding container ba2ac20676da417c4f79e5bf763ab8c0ece4a468300473372249e222de480600: Status 404 returned error can't find the container with id ba2ac20676da417c4f79e5bf763ab8c0ece4a468300473372249e222de480600 Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.642463 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vt9cs" podStartSLOduration=134.642440124 podStartE2EDuration="2m14.642440124s" podCreationTimestamp="2025-11-29 07:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:39.641813998 +0000 UTC m=+159.263890066" watchObservedRunningTime="2025-11-29 07:03:39.642440124 +0000 UTC m=+159.264516192" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.675617 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ss6dh" podStartSLOduration=133.675592339 podStartE2EDuration="2m13.675592339s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:39.673938266 +0000 UTC m=+159.296014344" watchObservedRunningTime="2025-11-29 07:03:39.675592339 +0000 UTC m=+159.297668407" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.694756 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:39 crc kubenswrapper[4828]: E1129 07:03:39.695297 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:40.195281006 +0000 UTC m=+159.817357064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.734595 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vjrr4" podStartSLOduration=133.734576369 podStartE2EDuration="2m13.734576369s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:39.721717338 +0000 UTC m=+159.343793396" watchObservedRunningTime="2025-11-29 07:03:39.734576369 +0000 UTC m=+159.356652427" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.746260 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7lwfp"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.746424 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.756547 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" podStartSLOduration=133.756525445 podStartE2EDuration="2m13.756525445s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:39.75557535 +0000 UTC m=+159.377651428" 
watchObservedRunningTime="2025-11-29 07:03:39.756525445 +0000 UTC m=+159.378601513" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.768353 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.768978 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5xwt7"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.803218 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:39 crc kubenswrapper[4828]: E1129 07:03:39.805475 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:40.305449866 +0000 UTC m=+159.927525924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.811705 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-svfss" podStartSLOduration=133.811678497 podStartE2EDuration="2m13.811678497s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:39.80520649 +0000 UTC m=+159.427282548" watchObservedRunningTime="2025-11-29 07:03:39.811678497 +0000 UTC m=+159.433754555" Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.815572 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:39 crc kubenswrapper[4828]: E1129 07:03:39.815965 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:40.315951487 +0000 UTC m=+159.938027545 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.885987 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gscbf"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.912685 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s2ds9"] Nov 29 07:03:39 crc kubenswrapper[4828]: W1129 07:03:39.912813 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff651c8d_3ada_4888_990d_6b0edc5595f4.slice/crio-7c06152bd872c0244e98d3cb5235b90d3e8a8f550f9a62a253a4e5dab9bb2d47 WatchSource:0}: Error finding container 7c06152bd872c0244e98d3cb5235b90d3e8a8f550f9a62a253a4e5dab9bb2d47: Status 404 returned error can't find the container with id 7c06152bd872c0244e98d3cb5235b90d3e8a8f550f9a62a253a4e5dab9bb2d47 Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.914653 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5"] Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.916774 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 
07:03:39 crc kubenswrapper[4828]: E1129 07:03:39.917666 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:40.417623428 +0000 UTC m=+160.039699486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:39 crc kubenswrapper[4828]: I1129 07:03:39.927464 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hmxx8"] Nov 29 07:03:39 crc kubenswrapper[4828]: W1129 07:03:39.984811 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod975b55fc_fe38_4516_bcef_5af821ad487c.slice/crio-35eeab9178f2b5b44bb0b8ff7c85fe0ef0995d2df628b63d21813a78b7c70820 WatchSource:0}: Error finding container 35eeab9178f2b5b44bb0b8ff7c85fe0ef0995d2df628b63d21813a78b7c70820: Status 404 returned error can't find the container with id 35eeab9178f2b5b44bb0b8ff7c85fe0ef0995d2df628b63d21813a78b7c70820 Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.016556 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9"] Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.019225 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:40 crc kubenswrapper[4828]: E1129 07:03:40.020258 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:40.520205482 +0000 UTC m=+160.142281530 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.033327 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x5w66"] Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.033389 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw"] Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.072388 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xpv8b"] Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.078954 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xt2sv"] Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.084581 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw"] Nov 29 
07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.084655 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8"] Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.089303 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx"] Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.133847 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:40 crc kubenswrapper[4828]: E1129 07:03:40.134773 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:40.634750394 +0000 UTC m=+160.256826452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:40 crc kubenswrapper[4828]: W1129 07:03:40.148911 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ba8ca1a_d67d_4042_bebb_94891b81644f.slice/crio-3566f9402e04f0cb9f1b44366f98ccb1ba1accdfc7b46073ef6fad8191b41271 WatchSource:0}: Error finding container 3566f9402e04f0cb9f1b44366f98ccb1ba1accdfc7b46073ef6fad8191b41271: Status 404 returned error can't find the container with id 3566f9402e04f0cb9f1b44366f98ccb1ba1accdfc7b46073ef6fad8191b41271 Nov 29 07:03:40 crc kubenswrapper[4828]: W1129 07:03:40.167480 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac998e4b_4fb3_4c4b_8a57_48c64d7a2a0c.slice/crio-f446727606bb096ae2b99535fda7107e92f654c93b7553120224d31b2dd426b8 WatchSource:0}: Error finding container f446727606bb096ae2b99535fda7107e92f654c93b7553120224d31b2dd426b8: Status 404 returned error can't find the container with id f446727606bb096ae2b99535fda7107e92f654c93b7553120224d31b2dd426b8 Nov 29 07:03:40 crc kubenswrapper[4828]: W1129 07:03:40.174711 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c3fcb52_17ea_44d8_b364_1ca524a05878.slice/crio-3edd7b4c3ddd178cc1d7f7dd0b6af188284ca0f438211b21714107292d0b55f2 WatchSource:0}: Error finding container 3edd7b4c3ddd178cc1d7f7dd0b6af188284ca0f438211b21714107292d0b55f2: Status 404 returned error can't find the container with 
id 3edd7b4c3ddd178cc1d7f7dd0b6af188284ca0f438211b21714107292d0b55f2 Nov 29 07:03:40 crc kubenswrapper[4828]: W1129 07:03:40.205418 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a310a8f_1e39_4e6f_8c94_e053124e444d.slice/crio-64db65477beb2c38315668619eeb54449e3fc20fe9905aecc8526d3f5ecb0e76 WatchSource:0}: Error finding container 64db65477beb2c38315668619eeb54449e3fc20fe9905aecc8526d3f5ecb0e76: Status 404 returned error can't find the container with id 64db65477beb2c38315668619eeb54449e3fc20fe9905aecc8526d3f5ecb0e76 Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.235572 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:40 crc kubenswrapper[4828]: E1129 07:03:40.235955 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:40.735940253 +0000 UTC m=+160.358016311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.336049 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:40 crc kubenswrapper[4828]: E1129 07:03:40.336462 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:40.836442113 +0000 UTC m=+160.458518171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.439851 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:40 crc kubenswrapper[4828]: E1129 07:03:40.440172 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:40.940159017 +0000 UTC m=+160.562235075 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.543350 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:40 crc kubenswrapper[4828]: E1129 07:03:40.544377 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:41.044343282 +0000 UTC m=+160.666419380 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.548699 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" event={"ID":"ec00b335-adab-4b39-a98e-b68fdb402a27","Type":"ContainerStarted","Data":"2acdc7ebb6686913699e7efea18edb8f3c525f775f57fa63ffc6b8681a02ad3a"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.557838 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5xwt7" event={"ID":"d02f8ae1-0dd2-41de-852e-1bd55a992cf1","Type":"ContainerStarted","Data":"17397d99add486e443f50f4321dabc89835e1874256bef58372d1bbd4e169837"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.562128 4828 patch_prober.go:28] interesting pod/router-default-5444994796-rmxsv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:03:40 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Nov 29 07:03:40 crc kubenswrapper[4828]: [+]process-running ok Nov 29 07:03:40 crc kubenswrapper[4828]: healthz check failed Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.562197 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmxsv" podUID="fbc422bf-1668-470a-96a8-d94bbe3a2209" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 
07:03:40.568200 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xt2sv" event={"ID":"5269b919-32d1-403a-b90d-f63894e9be39","Type":"ContainerStarted","Data":"97b6805ae20738eaa7b27cc1df0e235d225ee794e448d4010b625e8ebed1346d"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.577370 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-7njjk" podStartSLOduration=134.577349713 podStartE2EDuration="2m14.577349713s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:40.576735697 +0000 UTC m=+160.198811765" watchObservedRunningTime="2025-11-29 07:03:40.577349713 +0000 UTC m=+160.199425801" Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.582726 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-95w8h" event={"ID":"118d01c2-66e7-465e-910e-7a53a3516b56","Type":"ContainerStarted","Data":"8729eea920bd3b965525caa873380403c5fb0e6222325158e667afa7ecb9af5d"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.582779 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-95w8h" event={"ID":"118d01c2-66e7-465e-910e-7a53a3516b56","Type":"ContainerStarted","Data":"8448fb97e4c4e4ac3e0e8c20ca9151407c36178c332e1ef80c6447e761aa5c0e"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.609921 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" event={"ID":"13d1f1ec-a922-4d84-93b3-214bff4187c0","Type":"ContainerStarted","Data":"e1064c3f340692f75c7ccbb4d4133b2f28c21ad5a1280b26756764787c1ce781"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.647090 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:40 crc kubenswrapper[4828]: E1129 07:03:40.647412 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:41.147400319 +0000 UTC m=+160.769476377 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.647782 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" event={"ID":"aa962b58-6ac1-4c82-86e5-d89b29f40391","Type":"ContainerStarted","Data":"e630d8e0eb987d7d16cf6bf4bb97c92ea1bc34b22c9b1f206ad91d456ca195d5"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.654432 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq" event={"ID":"580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8","Type":"ContainerStarted","Data":"57825667d7c97b65f7a56ead49c730bfc9c785ac1c5f7f3bd907d8d8ca27b79f"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.655339 4828 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" podStartSLOduration=134.655327893 podStartE2EDuration="2m14.655327893s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:40.654769749 +0000 UTC m=+160.276845817" watchObservedRunningTime="2025-11-29 07:03:40.655327893 +0000 UTC m=+160.277403941" Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.664032 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-95w8h" podStartSLOduration=134.664006157 podStartE2EDuration="2m14.664006157s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:40.620856295 +0000 UTC m=+160.242932353" watchObservedRunningTime="2025-11-29 07:03:40.664006157 +0000 UTC m=+160.286082215" Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.673288 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8" event={"ID":"80057f69-af41-4b81-adf4-b8851e70294f","Type":"ContainerStarted","Data":"729670dc889c13a67e65cc911c5e0b8fd94c9d78ed5f2bfb68eecc2ec241522e"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.680493 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" event={"ID":"5ba8ca1a-d67d-4042-bebb-94891b81644f","Type":"ContainerStarted","Data":"3566f9402e04f0cb9f1b44366f98ccb1ba1accdfc7b46073ef6fad8191b41271"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.763634 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:40 crc kubenswrapper[4828]: E1129 07:03:40.763912 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:41.263878366 +0000 UTC m=+160.885954434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.764069 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.764222 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" event={"ID":"79d7569d-1e02-4c21-af59-f692827931a9","Type":"ContainerStarted","Data":"ff5e93e71a8725b2cd2bdbc8c1a9e77d0db11a54681742b20d0d93dfd4d5585b"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.764295 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" event={"ID":"79d7569d-1e02-4c21-af59-f692827931a9","Type":"ContainerStarted","Data":"2f620413a148a078cea0c856866dd5c908fd034ae6feb14146598ed55a698f8f"} Nov 29 07:03:40 crc kubenswrapper[4828]: E1129 07:03:40.765659 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:41.265645602 +0000 UTC m=+160.887721660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.804360 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" event={"ID":"d464f5b3-e407-4711-9fcf-823eb7ae866d","Type":"ContainerStarted","Data":"e4aeb4232aaf965c3f53a27dd98a8547a1af678c7ba4120cb70bca7e0bec7b7f"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.829040 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" podStartSLOduration=134.829022074 podStartE2EDuration="2m14.829022074s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:40.827608677 +0000 UTC m=+160.449684745" watchObservedRunningTime="2025-11-29 07:03:40.829022074 +0000 UTC m=+160.451098132" Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 
07:03:40.841678 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ktplp" event={"ID":"eb6b6e45-3101-4755-a294-ad55096f3483","Type":"ContainerStarted","Data":"35989ddf6cf550063919aec56311096f571b24000d80c3cf47fcbfe5b4dd00a0"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.841724 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ktplp" event={"ID":"eb6b6e45-3101-4755-a294-ad55096f3483","Type":"ContainerStarted","Data":"b4a07e6c7807dea72f0d740d90e315d777c64002c6e6248c5968786760b98da8"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.852755 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9vbf7" event={"ID":"78cb844a-3bae-4cd2-9fb8-63f20fec1755","Type":"ContainerStarted","Data":"0bb265ca626867799823d2fe9e6184ba39d765ab2a6109e5c6dae81813541993"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.852802 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9vbf7" event={"ID":"78cb844a-3bae-4cd2-9fb8-63f20fec1755","Type":"ContainerStarted","Data":"f3bc91b6d2235fe32c1d2a278557c8b143268241357f0526b8de33038381972a"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.856128 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" event={"ID":"ff651c8d-3ada-4888-990d-6b0edc5595f4","Type":"ContainerStarted","Data":"7c06152bd872c0244e98d3cb5235b90d3e8a8f550f9a62a253a4e5dab9bb2d47"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.862858 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" event={"ID":"13bf3905-e3c4-4b60-a233-d459262f9b98","Type":"ContainerStarted","Data":"b5c8f0a6bfaa5824410552672887091a5a3f8d59cfd550b5683eb4a54d2175cc"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.862918 
4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" event={"ID":"13bf3905-e3c4-4b60-a233-d459262f9b98","Type":"ContainerStarted","Data":"5f833d3e6d4a3928e127a65f7c2eebd685097b1d18fb5f489b487e6b9eb40e5a"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.864739 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.865872 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:40 crc kubenswrapper[4828]: E1129 07:03:40.866221 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:41.366203851 +0000 UTC m=+160.988279909 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.883849 4828 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nz25w container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.883915 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" podUID="13bf3905-e3c4-4b60-a233-d459262f9b98" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.884843 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" event={"ID":"c282f664-abb6-4151-83a5-badb4471d931","Type":"ContainerStarted","Data":"8d3efb32eda2ed0affcca3ee368c3963baf143c4c0adbd0c27d15a9664ad101e"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.889965 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" event={"ID":"c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e","Type":"ContainerStarted","Data":"93d492cc71c1ecfc3cfcc1e0dc4ca9e307f7c32d4405d547ba8b306428741784"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.895867 4828 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-console/console-f9d7485db-9vbf7" podStartSLOduration=134.895843014 podStartE2EDuration="2m14.895843014s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:40.893904604 +0000 UTC m=+160.515980672" watchObservedRunningTime="2025-11-29 07:03:40.895843014 +0000 UTC m=+160.517919072" Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.914764 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" event={"ID":"634b47b0-ce44-446c-8f87-531a593c576b","Type":"ContainerStarted","Data":"259b9d08a69ad9f7843607626a4c8982390d7d0e96f8e110aa4be07531637157"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.918950 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" podStartSLOduration=134.918931569 podStartE2EDuration="2m14.918931569s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:40.91741098 +0000 UTC m=+160.539487038" watchObservedRunningTime="2025-11-29 07:03:40.918931569 +0000 UTC m=+160.541007627" Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.938624 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" event={"ID":"03f7edb8-ded1-483c-81d1-d75417a3dbdc","Type":"ContainerStarted","Data":"a646a41c0f1ca52e9e9c9e4c7ea2710d12ba102ae629881ec6a8e6f4ac0fef28"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.939640 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 
07:03:40.960587 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" event={"ID":"ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c","Type":"ContainerStarted","Data":"f446727606bb096ae2b99535fda7107e92f654c93b7553120224d31b2dd426b8"} Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.967163 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:40 crc kubenswrapper[4828]: E1129 07:03:40.967547 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:41.467529531 +0000 UTC m=+161.089605589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:40 crc kubenswrapper[4828]: I1129 07:03:40.978678 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s2ds9" event={"ID":"9a7e6cb9-6c64-425d-92fe-f067a47489ac","Type":"ContainerStarted","Data":"b5dfad439bcf6df194ff33b10e11338021370d0c94eb8b1d3fa9740f0eafb097"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.015775 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.016375 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" podStartSLOduration=135.016358079 podStartE2EDuration="2m15.016358079s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:41.014850051 +0000 UTC m=+160.636926099" watchObservedRunningTime="2025-11-29 07:03:41.016358079 +0000 UTC m=+160.638434127" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.019405 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xpv8b" event={"ID":"2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c","Type":"ContainerStarted","Data":"5a2244a790fa275aa4296eb4100457296b24705f9f6d2a55939696eec6aedf1b"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.028073 4828 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" event={"ID":"8a310a8f-1e39-4e6f-8c94-e053124e444d","Type":"ContainerStarted","Data":"64db65477beb2c38315668619eeb54449e3fc20fe9905aecc8526d3f5ecb0e76"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.071559 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:41 crc kubenswrapper[4828]: E1129 07:03:41.073281 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:41.573245825 +0000 UTC m=+161.195321883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.110117 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw" event={"ID":"6c3fcb52-17ea-44d8-b364-1ca524a05878","Type":"ContainerStarted","Data":"3edd7b4c3ddd178cc1d7f7dd0b6af188284ca0f438211b21714107292d0b55f2"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.134013 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bwcm4" event={"ID":"cafa68b0-17e5-4a83-aefd-560d84f521ea","Type":"ContainerStarted","Data":"d8654611f3d0cb1ad2241ee7c60aef395d68d971b8a06ac7bfd0f412fd453c46"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.137501 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h" event={"ID":"d6b2a61f-b080-46c7-a007-6108a359afe7","Type":"ContainerStarted","Data":"97c3a29c91ffa3af38a1cb8e778864221cb73b965301a6d7a069a2a6a01aafe1"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.137539 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h" event={"ID":"d6b2a61f-b080-46c7-a007-6108a359afe7","Type":"ContainerStarted","Data":"2240fc64e5bc76feb9392b1e0397d78af549e114f3cdb3463a61b5a6e13a6955"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.138713 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" podStartSLOduration=135.138697202 podStartE2EDuration="2m15.138697202s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:41.090723515 +0000 UTC m=+160.712799573" watchObservedRunningTime="2025-11-29 07:03:41.138697202 +0000 UTC m=+160.760773260" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.140986 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gscbf" event={"ID":"3ac75381-8d8e-408c-806f-59c59ca888df","Type":"ContainerStarted","Data":"62c037fc19c8138e7fff50384e27e63645864d7492c14d0949c78a822f1e77ea"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.169035 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr" event={"ID":"455fc72a-8bd9-44d9-9e09-ba1d9db0fce8","Type":"ContainerStarted","Data":"01348ff1f460eeaca9be51b2c6b177067e7f96ef95f3cc0470a8189b8faabd9b"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.169097 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr" event={"ID":"455fc72a-8bd9-44d9-9e09-ba1d9db0fce8","Type":"ContainerStarted","Data":"d4c8fd12f1baae53e67bc81e753c5fce96c867dca49675408a2d55b2a3021c05"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.173096 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:41 crc kubenswrapper[4828]: E1129 
07:03:41.176135 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:41.676117037 +0000 UTC m=+161.298193175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.178666 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf" event={"ID":"975b55fc-fe38-4516-bcef-5af821ad487c","Type":"ContainerStarted","Data":"35eeab9178f2b5b44bb0b8ff7c85fe0ef0995d2df628b63d21813a78b7c70820"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.187718 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw" podStartSLOduration=135.187699395 podStartE2EDuration="2m15.187699395s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:41.183940668 +0000 UTC m=+160.806016726" watchObservedRunningTime="2025-11-29 07:03:41.187699395 +0000 UTC m=+160.809775453" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.191989 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" 
event={"ID":"e138e355-8c56-47a6-9008-c8679fad48d5","Type":"ContainerStarted","Data":"b20da09276b0e93fb831d36db53dc82a0dc722ceb1ab2112f563f39bf99ee9a3"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.192045 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" event={"ID":"e138e355-8c56-47a6-9008-c8679fad48d5","Type":"ContainerStarted","Data":"ba2ac20676da417c4f79e5bf763ab8c0ece4a468300473372249e222de480600"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.193139 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.210078 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x5w66" event={"ID":"6d8629cd-6b91-47d6-be66-cc036042a6e8","Type":"ContainerStarted","Data":"e85c261e651a478912307cfefe1dff4940e1ded9c635bf7cb0b8c2ff84bbc6f1"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.232203 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-bwcm4" podStartSLOduration=135.232175532 podStartE2EDuration="2m15.232175532s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:41.223723024 +0000 UTC m=+160.845799082" watchObservedRunningTime="2025-11-29 07:03:41.232175532 +0000 UTC m=+160.854251600" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.233218 4828 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvnk2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 
07:03:41.233286 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvnk2" podUID="c52a7bb7-0f41-4457-a354-be5d25881767" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.233449 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5" event={"ID":"d42c676c-5d0d-41e6-a7d9-51ec413d3b45","Type":"ContainerStarted","Data":"527ae2fe42345275362d862769e9d7a907b6fcb6275ee60042dae63ae9dcfb00"} Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.253360 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.267754 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-svfss" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.273984 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:41 crc kubenswrapper[4828]: E1129 07:03:41.275363 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:41.775342174 +0000 UTC m=+161.397418232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.278885 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4fnwr" podStartSLOduration=135.278860565 podStartE2EDuration="2m15.278860565s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:41.27244942 +0000 UTC m=+160.894525498" watchObservedRunningTime="2025-11-29 07:03:41.278860565 +0000 UTC m=+160.900936643" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.279018 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f282h" podStartSLOduration=135.279013309 podStartE2EDuration="2m15.279013309s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:41.245935176 +0000 UTC m=+160.868011234" watchObservedRunningTime="2025-11-29 07:03:41.279013309 +0000 UTC m=+160.901089367" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.364539 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jwfr9" podStartSLOduration=135.364518623 podStartE2EDuration="2m15.364518623s" podCreationTimestamp="2025-11-29 07:01:26 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:41.31706585 +0000 UTC m=+160.939141908" watchObservedRunningTime="2025-11-29 07:03:41.364518623 +0000 UTC m=+160.986594681" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.375480 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:41 crc kubenswrapper[4828]: E1129 07:03:41.376034 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:41.87601765 +0000 UTC m=+161.498093708 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.476456 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:41 crc kubenswrapper[4828]: E1129 07:03:41.476874 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:41.976858412 +0000 UTC m=+161.598934470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.487601 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.487829 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.563063 4828 patch_prober.go:28] interesting pod/router-default-5444994796-rmxsv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:03:41 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Nov 29 07:03:41 crc kubenswrapper[4828]: [+]process-running ok Nov 29 07:03:41 crc kubenswrapper[4828]: healthz check failed Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.563137 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmxsv" podUID="fbc422bf-1668-470a-96a8-d94bbe3a2209" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.579600 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:41 crc kubenswrapper[4828]: E1129 07:03:41.579981 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:42.079968217 +0000 UTC m=+161.702044275 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.680659 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:41 crc kubenswrapper[4828]: E1129 07:03:41.680919 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:03:42.180881738 +0000 UTC m=+161.802957796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.782628 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:41 crc kubenswrapper[4828]: E1129 07:03:41.783095 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:42.283071582 +0000 UTC m=+161.905147640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.886436 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:41 crc kubenswrapper[4828]: E1129 07:03:41.886799 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:42.386782025 +0000 UTC m=+162.008858083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.968892 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.969867 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.977029 4828 patch_prober.go:28] interesting pod/apiserver-76f77b778f-bdxmg container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 29 07:03:41 crc kubenswrapper[4828]: [+]log ok Nov 29 07:03:41 crc kubenswrapper[4828]: [+]etcd ok Nov 29 07:03:41 crc kubenswrapper[4828]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 29 07:03:41 crc kubenswrapper[4828]: [+]poststarthook/generic-apiserver-start-informers ok Nov 29 07:03:41 crc kubenswrapper[4828]: [+]poststarthook/max-in-flight-filter ok Nov 29 07:03:41 crc kubenswrapper[4828]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 29 07:03:41 crc kubenswrapper[4828]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 29 07:03:41 crc kubenswrapper[4828]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 29 07:03:41 crc kubenswrapper[4828]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 29 07:03:41 crc kubenswrapper[4828]: 
[+]poststarthook/project.openshift.io-projectcache ok Nov 29 07:03:41 crc kubenswrapper[4828]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 29 07:03:41 crc kubenswrapper[4828]: [+]poststarthook/openshift.io-startinformers ok Nov 29 07:03:41 crc kubenswrapper[4828]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 29 07:03:41 crc kubenswrapper[4828]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 29 07:03:41 crc kubenswrapper[4828]: livez check failed Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.977088 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" podUID="13d1f1ec-a922-4d84-93b3-214bff4187c0" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:03:41 crc kubenswrapper[4828]: E1129 07:03:41.988404 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:42.488385514 +0000 UTC m=+162.110461572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:41 crc kubenswrapper[4828]: I1129 07:03:41.988451 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.070406 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.070872 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.089286 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:42 crc kubenswrapper[4828]: E1129 07:03:42.089737 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:03:42.589708536 +0000 UTC m=+162.211784624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.091649 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.190221 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:42 crc kubenswrapper[4828]: E1129 07:03:42.190501 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:42.690489034 +0000 UTC m=+162.312565092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.251617 4828 generic.go:334] "Generic (PLEG): container finished" podID="c282f664-abb6-4151-83a5-badb4471d931" containerID="ff99533dad7f1f6d516d8002e9c704555eeb2510b3f865e0bd10309f9a8c9f90" exitCode=0 Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.251726 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" event={"ID":"c282f664-abb6-4151-83a5-badb4471d931","Type":"ContainerDied","Data":"ff99533dad7f1f6d516d8002e9c704555eeb2510b3f865e0bd10309f9a8c9f90"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.251753 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" event={"ID":"c282f664-abb6-4151-83a5-badb4471d931","Type":"ContainerStarted","Data":"5d275d2dd43dc61630943b4049f39f8acd61d7c09d8364e0b80e3f2638ec1666"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.252900 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.261683 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xt2sv" event={"ID":"5269b919-32d1-403a-b90d-f63894e9be39","Type":"ContainerStarted","Data":"ffec13e6f0dafa83e4569fe175411cd9e904967f018953747fe79cbd1ea85c6c"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.280312 4828 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" event={"ID":"5ba8ca1a-d67d-4042-bebb-94891b81644f","Type":"ContainerStarted","Data":"3172a42d5f8110f44e34db1dfec5519db7aa33bcb60a58de6dc264065bb01a77"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.280880 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" podStartSLOduration=136.280857984 podStartE2EDuration="2m16.280857984s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:42.279780806 +0000 UTC m=+161.901856864" watchObservedRunningTime="2025-11-29 07:03:42.280857984 +0000 UTC m=+161.902934042" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.281275 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.293905 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:42 crc kubenswrapper[4828]: E1129 07:03:42.294253 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:42.794236598 +0000 UTC m=+162.416312666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.295079 4828 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hmxx8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.295117 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" podUID="5ba8ca1a-d67d-4042-bebb-94891b81644f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.308173 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" event={"ID":"634b47b0-ce44-446c-8f87-531a593c576b","Type":"ContainerStarted","Data":"5f4e3d8563cc18899f9777785bb6fa3e9dfc253c4496cbf8b653ce938561f65b"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.317590 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xt2sv" podStartSLOduration=8.31757245 podStartE2EDuration="8.31757245s" podCreationTimestamp="2025-11-29 07:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 
07:03:42.31369222 +0000 UTC m=+161.935768278" watchObservedRunningTime="2025-11-29 07:03:42.31757245 +0000 UTC m=+161.939648508" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.355147 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" event={"ID":"c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e","Type":"ContainerStarted","Data":"db278b1b388fddc378cad50f1a642d1c3dfdc33bf6f4026f788769cde622c4d1"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.355561 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.359761 4828 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlzkw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.359816 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" podUID="c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.362332 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" podStartSLOduration=136.362315393 podStartE2EDuration="2m16.362315393s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:42.360008384 +0000 UTC m=+161.982084442" watchObservedRunningTime="2025-11-29 07:03:42.362315393 
+0000 UTC m=+161.984391451" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.384623 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5xwt7" event={"ID":"d02f8ae1-0dd2-41de-852e-1bd55a992cf1","Type":"ContainerStarted","Data":"3d772bb54b3009945a0dc7831332811a79daa45d820fc8c23e37f2999ecf448f"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.394840 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf" event={"ID":"975b55fc-fe38-4516-bcef-5af821ad487c","Type":"ContainerStarted","Data":"913758e87cef0193ce66422f2e5a22a098ea739097b9420d0cf069feeba70ae8"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.395053 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.395611 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf" Nov 29 07:03:42 crc kubenswrapper[4828]: E1129 07:03:42.396379 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:42.896360851 +0000 UTC m=+162.518436909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.401661 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xpv8b" event={"ID":"2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c","Type":"ContainerStarted","Data":"c22813b16b5dde38d1074746bac1f181058844f1445a7ebc28dbee1df4af1d81"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.408825 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gscbf" event={"ID":"3ac75381-8d8e-408c-806f-59c59ca888df","Type":"ContainerStarted","Data":"cf7ed6c06c5a3239067890237f19742e9620ea4c71a4f3af879115e68bf6a127"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.408874 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gscbf" event={"ID":"3ac75381-8d8e-408c-806f-59c59ca888df","Type":"ContainerStarted","Data":"13ae72da629e4e0431d811324923c6adae4204f2c802161a752b07072460c064"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.438621 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq" event={"ID":"580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8","Type":"ContainerStarted","Data":"beb7e9a0b0417437ea47c4e0a2be6a5d42868a422c9b06f9dbe9c3b3357fb73b"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.438682 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq" event={"ID":"580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8","Type":"ContainerStarted","Data":"4fecf197028ce57e06fde76dd2882ba807d8a68d6e07ba631a3ebfd60845bc4c"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.446236 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-447zw" event={"ID":"6c3fcb52-17ea-44d8-b364-1ca524a05878","Type":"ContainerStarted","Data":"04a1ad05eb833cb78eb802ee0e742ec4975fdf264616b08cd9a15f6486de1617"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.465138 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" event={"ID":"aa962b58-6ac1-4c82-86e5-d89b29f40391","Type":"ContainerStarted","Data":"aff09720cfe4014ebba4beb4b01aadd3e7e9d6140f48a37fe301543cb3bb96f9"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.493572 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s2ds9" event={"ID":"9a7e6cb9-6c64-425d-92fe-f067a47489ac","Type":"ContainerStarted","Data":"1ff1e2b750c091b277b63929d1cee9b609c1b765a9f3fc4bc2253026ae282ecd"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.495847 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:42 crc kubenswrapper[4828]: E1129 07:03:42.510145 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:03:43.010110453 +0000 UTC m=+162.632186511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.528746 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" event={"ID":"79d7569d-1e02-4c21-af59-f692827931a9","Type":"ContainerStarted","Data":"fcb959a9e430c3be32552e5cb8cee34ca9b32b92fcf530002444dc33fab8ebe1"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.529926 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" podStartSLOduration=136.529901223 podStartE2EDuration="2m16.529901223s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:42.428887669 +0000 UTC m=+162.050963727" watchObservedRunningTime="2025-11-29 07:03:42.529901223 +0000 UTC m=+162.151977281" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.530743 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf" podStartSLOduration=136.530737235 podStartE2EDuration="2m16.530737235s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:42.527546992 
+0000 UTC m=+162.149623070" watchObservedRunningTime="2025-11-29 07:03:42.530737235 +0000 UTC m=+162.152813293" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.530916 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" event={"ID":"ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c","Type":"ContainerStarted","Data":"0e0fc7b236c50713e38931e0270b56efae2940ead7b8ff42b27f6429dd039d40"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.544439 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" event={"ID":"8a310a8f-1e39-4e6f-8c94-e053124e444d","Type":"ContainerStarted","Data":"9ad37daa147caf71e61f7a48c0c224f6720ecd9fcd9063091b08c18fa918184a"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.545713 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.547720 4828 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xfngx container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.547785 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" podUID="8a310a8f-1e39-4e6f-8c94-e053124e444d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.548207 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" 
event={"ID":"ff651c8d-3ada-4888-990d-6b0edc5595f4","Type":"ContainerStarted","Data":"995b1ef007035076d6a788f9adbb3fdc608577793b704be828e8ed34d6438720"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.551146 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8" event={"ID":"80057f69-af41-4b81-adf4-b8851e70294f","Type":"ContainerStarted","Data":"94bf9ec8a85ada1ce7e2ff33059f2c073c82812f4f9d0dad40515d54abfcb94d"} Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.565025 4828 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvnk2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.565090 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvnk2" podUID="c52a7bb7-0f41-4457-a354-be5d25881767" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.565119 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qvpk" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.571502 4828 patch_prober.go:28] interesting pod/router-default-5444994796-rmxsv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:03:42 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Nov 29 07:03:42 crc kubenswrapper[4828]: [+]process-running ok Nov 29 07:03:42 crc kubenswrapper[4828]: healthz check failed Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.571546 4828 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmxsv" podUID="fbc422bf-1668-470a-96a8-d94bbe3a2209" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.601430 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.614148 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:42 crc kubenswrapper[4828]: E1129 07:03:42.615863 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:43.115840818 +0000 UTC m=+162.737916936 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.644168 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gscbf" podStartSLOduration=136.644141828 podStartE2EDuration="2m16.644141828s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:42.638146523 +0000 UTC m=+162.260222581" watchObservedRunningTime="2025-11-29 07:03:42.644141828 +0000 UTC m=+162.266217886" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.715640 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:42 crc kubenswrapper[4828]: E1129 07:03:42.721296 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:43.221260686 +0000 UTC m=+162.843336744 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.818069 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:42 crc kubenswrapper[4828]: E1129 07:03:42.818475 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:43.318460741 +0000 UTC m=+162.940536799 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.878978 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-5xwt7" podStartSLOduration=136.878953791 podStartE2EDuration="2m16.878953791s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:42.728917853 +0000 UTC m=+162.350993901" watchObservedRunningTime="2025-11-29 07:03:42.878953791 +0000 UTC m=+162.501029859" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.921415 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:42 crc kubenswrapper[4828]: E1129 07:03:42.921933 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:43.421912338 +0000 UTC m=+163.043988396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.942306 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" podStartSLOduration=136.942286043 podStartE2EDuration="2m16.942286043s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:42.880916971 +0000 UTC m=+162.502993049" watchObservedRunningTime="2025-11-29 07:03:42.942286043 +0000 UTC m=+162.564362111" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.944136 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vpwkr"] Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.956051 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.967778 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 29 07:03:42 crc kubenswrapper[4828]: I1129 07:03:42.977334 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpwkr"] Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.002460 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" podStartSLOduration=137.002428893 podStartE2EDuration="2m17.002428893s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:42.967564545 +0000 UTC m=+162.589640623" watchObservedRunningTime="2025-11-29 07:03:43.002428893 +0000 UTC m=+162.624504951" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.012349 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hz5qh" podStartSLOduration=137.012326038 podStartE2EDuration="2m17.012326038s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:43.001937411 +0000 UTC m=+162.624013499" watchObservedRunningTime="2025-11-29 07:03:43.012326038 +0000 UTC m=+162.634402096" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.024874 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: 
\"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:43 crc kubenswrapper[4828]: E1129 07:03:43.025323 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:43.525309083 +0000 UTC m=+163.147385141 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.048680 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7lwfp" podStartSLOduration=137.048662235 podStartE2EDuration="2m17.048662235s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:43.047652949 +0000 UTC m=+162.669729017" watchObservedRunningTime="2025-11-29 07:03:43.048662235 +0000 UTC m=+162.670738293" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.081061 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-db4cv"] Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.082036 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.110425 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.113323 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-db4cv"] Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.122627 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s2ds9" podStartSLOduration=137.122604621 podStartE2EDuration="2m17.122604621s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:43.120888277 +0000 UTC m=+162.742964345" watchObservedRunningTime="2025-11-29 07:03:43.122604621 +0000 UTC m=+162.744680689" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.125514 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.125803 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7ts8\" (UniqueName: \"kubernetes.io/projected/eccbf47b-47fe-4980-b09b-cde621bb188a-kube-api-access-s7ts8\") pod \"community-operators-vpwkr\" (UID: \"eccbf47b-47fe-4980-b09b-cde621bb188a\") " pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.125876 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccbf47b-47fe-4980-b09b-cde621bb188a-catalog-content\") pod \"community-operators-vpwkr\" (UID: \"eccbf47b-47fe-4980-b09b-cde621bb188a\") " pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.125947 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccbf47b-47fe-4980-b09b-cde621bb188a-utilities\") pod \"community-operators-vpwkr\" (UID: \"eccbf47b-47fe-4980-b09b-cde621bb188a\") " pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:03:43 crc kubenswrapper[4828]: E1129 07:03:43.126039 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:43.626022549 +0000 UTC m=+163.248098607 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.266805 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" podStartSLOduration=137.266769517 podStartE2EDuration="2m17.266769517s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:43.260240099 +0000 UTC m=+162.882316177" watchObservedRunningTime="2025-11-29 07:03:43.266769517 +0000 UTC m=+162.888845575" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.267796 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccbf47b-47fe-4980-b09b-cde621bb188a-utilities\") pod \"community-operators-vpwkr\" (UID: \"eccbf47b-47fe-4980-b09b-cde621bb188a\") " pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.267928 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a44e830-89c8-428e-ab90-d8936c069de4-catalog-content\") pod \"certified-operators-db4cv\" (UID: \"0a44e830-89c8-428e-ab90-d8936c069de4\") " pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.267979 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s7ts8\" (UniqueName: \"kubernetes.io/projected/eccbf47b-47fe-4980-b09b-cde621bb188a-kube-api-access-s7ts8\") pod \"community-operators-vpwkr\" (UID: \"eccbf47b-47fe-4980-b09b-cde621bb188a\") " pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.268085 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccbf47b-47fe-4980-b09b-cde621bb188a-catalog-content\") pod \"community-operators-vpwkr\" (UID: \"eccbf47b-47fe-4980-b09b-cde621bb188a\") " pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.268165 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a44e830-89c8-428e-ab90-d8936c069de4-utilities\") pod \"certified-operators-db4cv\" (UID: \"0a44e830-89c8-428e-ab90-d8936c069de4\") " pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.268297 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm555\" (UniqueName: \"kubernetes.io/projected/0a44e830-89c8-428e-ab90-d8936c069de4-kube-api-access-rm555\") pod \"certified-operators-db4cv\" (UID: \"0a44e830-89c8-428e-ab90-d8936c069de4\") " pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.268365 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:43 crc kubenswrapper[4828]: E1129 
07:03:43.268971 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:43.768952033 +0000 UTC m=+163.391028091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.269762 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccbf47b-47fe-4980-b09b-cde621bb188a-utilities\") pod \"community-operators-vpwkr\" (UID: \"eccbf47b-47fe-4980-b09b-cde621bb188a\") " pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.270479 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccbf47b-47fe-4980-b09b-cde621bb188a-catalog-content\") pod \"community-operators-vpwkr\" (UID: \"eccbf47b-47fe-4980-b09b-cde621bb188a\") " pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.305249 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jxws9"] Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.325380 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.330238 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jxws9"] Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.338289 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7ts8\" (UniqueName: \"kubernetes.io/projected/eccbf47b-47fe-4980-b09b-cde621bb188a-kube-api-access-s7ts8\") pod \"community-operators-vpwkr\" (UID: \"eccbf47b-47fe-4980-b09b-cde621bb188a\") " pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.374568 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.374833 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9da14c-b652-4eca-bf03-8eedf90d40fe-catalog-content\") pod \"community-operators-jxws9\" (UID: \"9a9da14c-b652-4eca-bf03-8eedf90d40fe\") " pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.374875 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a44e830-89c8-428e-ab90-d8936c069de4-utilities\") pod \"certified-operators-db4cv\" (UID: \"0a44e830-89c8-428e-ab90-d8936c069de4\") " pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.374905 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rm555\" (UniqueName: \"kubernetes.io/projected/0a44e830-89c8-428e-ab90-d8936c069de4-kube-api-access-rm555\") pod \"certified-operators-db4cv\" (UID: \"0a44e830-89c8-428e-ab90-d8936c069de4\") " pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.374971 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47mv5\" (UniqueName: \"kubernetes.io/projected/9a9da14c-b652-4eca-bf03-8eedf90d40fe-kube-api-access-47mv5\") pod \"community-operators-jxws9\" (UID: \"9a9da14c-b652-4eca-bf03-8eedf90d40fe\") " pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.374994 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a44e830-89c8-428e-ab90-d8936c069de4-catalog-content\") pod \"certified-operators-db4cv\" (UID: \"0a44e830-89c8-428e-ab90-d8936c069de4\") " pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.375010 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a9da14c-b652-4eca-bf03-8eedf90d40fe-utilities\") pod \"community-operators-jxws9\" (UID: \"9a9da14c-b652-4eca-bf03-8eedf90d40fe\") " pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:03:43 crc kubenswrapper[4828]: E1129 07:03:43.375107 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:43.875089989 +0000 UTC m=+163.497166047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.375482 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a44e830-89c8-428e-ab90-d8936c069de4-utilities\") pod \"certified-operators-db4cv\" (UID: \"0a44e830-89c8-428e-ab90-d8936c069de4\") " pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.375934 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a44e830-89c8-428e-ab90-d8936c069de4-catalog-content\") pod \"certified-operators-db4cv\" (UID: \"0a44e830-89c8-428e-ab90-d8936c069de4\") " pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.397848 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9qj8" podStartSLOduration=137.397829816 podStartE2EDuration="2m17.397829816s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:43.394041448 +0000 UTC m=+163.016117506" watchObservedRunningTime="2025-11-29 07:03:43.397829816 +0000 UTC m=+163.019905874" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.421597 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rm555\" (UniqueName: \"kubernetes.io/projected/0a44e830-89c8-428e-ab90-d8936c069de4-kube-api-access-rm555\") pod \"certified-operators-db4cv\" (UID: \"0a44e830-89c8-428e-ab90-d8936c069de4\") " pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.439761 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.468597 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq" podStartSLOduration=137.468577539 podStartE2EDuration="2m17.468577539s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:43.465763637 +0000 UTC m=+163.087839695" watchObservedRunningTime="2025-11-29 07:03:43.468577539 +0000 UTC m=+163.090653587" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.482150 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b2qvr"] Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.483180 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.483749 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.483838 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47mv5\" (UniqueName: \"kubernetes.io/projected/9a9da14c-b652-4eca-bf03-8eedf90d40fe-kube-api-access-47mv5\") pod \"community-operators-jxws9\" (UID: \"9a9da14c-b652-4eca-bf03-8eedf90d40fe\") " pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.483863 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a9da14c-b652-4eca-bf03-8eedf90d40fe-utilities\") pod \"community-operators-jxws9\" (UID: \"9a9da14c-b652-4eca-bf03-8eedf90d40fe\") " pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.483928 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9da14c-b652-4eca-bf03-8eedf90d40fe-catalog-content\") pod \"community-operators-jxws9\" (UID: \"9a9da14c-b652-4eca-bf03-8eedf90d40fe\") " pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.484402 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9da14c-b652-4eca-bf03-8eedf90d40fe-catalog-content\") pod 
\"community-operators-jxws9\" (UID: \"9a9da14c-b652-4eca-bf03-8eedf90d40fe\") " pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.484671 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a9da14c-b652-4eca-bf03-8eedf90d40fe-utilities\") pod \"community-operators-jxws9\" (UID: \"9a9da14c-b652-4eca-bf03-8eedf90d40fe\") " pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:03:43 crc kubenswrapper[4828]: E1129 07:03:43.485015 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:43.984993542 +0000 UTC m=+163.607069680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.550709 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b2qvr"] Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.565469 4828 patch_prober.go:28] interesting pod/router-default-5444994796-rmxsv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:03:43 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Nov 29 07:03:43 crc kubenswrapper[4828]: [+]process-running ok Nov 29 07:03:43 crc kubenswrapper[4828]: 
healthz check failed Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.565804 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmxsv" podUID="fbc422bf-1668-470a-96a8-d94bbe3a2209" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.567965 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47mv5\" (UniqueName: \"kubernetes.io/projected/9a9da14c-b652-4eca-bf03-8eedf90d40fe-kube-api-access-47mv5\") pod \"community-operators-jxws9\" (UID: \"9a9da14c-b652-4eca-bf03-8eedf90d40fe\") " pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.584535 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.584758 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q9v4\" (UniqueName: \"kubernetes.io/projected/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-kube-api-access-5q9v4\") pod \"certified-operators-b2qvr\" (UID: \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\") " pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:03:43 crc kubenswrapper[4828]: E1129 07:03:43.585016 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:44.08498629 +0000 UTC m=+163.707062348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.586732 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf" event={"ID":"975b55fc-fe38-4516-bcef-5af821ad487c","Type":"ContainerStarted","Data":"fd9d63cfafbec511653ced742f997a60d24278fbf0369506baa467f49970c269"} Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.588982 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ktplp" event={"ID":"eb6b6e45-3101-4755-a294-ad55096f3483","Type":"ContainerStarted","Data":"c8572848fe9e366fb063b35756231e62a8800bb96f4e0cd1c6822eebe6f02757"} Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.584784 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-utilities\") pod \"certified-operators-b2qvr\" (UID: \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\") " pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.589392 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 
29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.589449 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-catalog-content\") pod \"certified-operators-b2qvr\" (UID: \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\") " pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:03:43 crc kubenswrapper[4828]: E1129 07:03:43.589810 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:44.089795034 +0000 UTC m=+163.711871092 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.611054 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwhr9" event={"ID":"ac998e4b-4fb3-4c4b-8a57-48c64d7a2a0c","Type":"ContainerStarted","Data":"aebc90fd29c907f9bf0f835fece4e5db05150b08a6a40063c62583381ef391c9"} Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.618615 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.722256 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.723051 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.723670 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-catalog-content\") pod \"certified-operators-b2qvr\" (UID: \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\") " pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:03:43 crc kubenswrapper[4828]: E1129 07:03:43.723957 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:44.22387508 +0000 UTC m=+163.845951188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.724118 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q9v4\" (UniqueName: \"kubernetes.io/projected/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-kube-api-access-5q9v4\") pod \"certified-operators-b2qvr\" (UID: \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\") " pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.724183 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-utilities\") pod \"certified-operators-b2qvr\" (UID: \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\") " pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.724822 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-utilities\") pod \"certified-operators-b2qvr\" (UID: \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\") " pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.725173 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-catalog-content\") pod \"certified-operators-b2qvr\" (UID: \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\") " 
pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.752591 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xpv8b" event={"ID":"2e1d4ff2-56bf-4f1d-af2e-e7c9c1d5289c","Type":"ContainerStarted","Data":"bd54e586a4c56cccb900fbfc190886b7be6a5b2e3f145bbb6f4e5a943d68d7d0"} Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.753971 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xpv8b" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.790651 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q9v4\" (UniqueName: \"kubernetes.io/projected/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-kube-api-access-5q9v4\") pod \"certified-operators-b2qvr\" (UID: \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\") " pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.835559 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x5w66" event={"ID":"6d8629cd-6b91-47d6-be66-cc036042a6e8","Type":"ContainerStarted","Data":"71f11f4486f6d4b3ffeef94e1a9026e464a51f239399d626684ea57450a9c5b6"} Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.835609 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x5w66" event={"ID":"6d8629cd-6b91-47d6-be66-cc036042a6e8","Type":"ContainerStarted","Data":"1fdae03a6145fcdfa3c7e574c8dcaf1d9f934eef0e474d28d1f741e5575ed39e"} Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.845481 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.847441 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:03:43 crc kubenswrapper[4828]: E1129 07:03:43.847964 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:44.347941038 +0000 UTC m=+163.970017176 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.885260 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-ktplp" podStartSLOduration=137.885226519 podStartE2EDuration="2m17.885226519s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:43.807648039 +0000 UTC m=+163.429724097" watchObservedRunningTime="2025-11-29 07:03:43.885226519 +0000 UTC m=+163.507302577" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.886197 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5" 
event={"ID":"d42c676c-5d0d-41e6-a7d9-51ec413d3b45","Type":"ContainerStarted","Data":"888ce7423117b5f117b9360a20882f81abc98bc9e39dcd756a63419250bb89a4"} Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.934835 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-57mhg" event={"ID":"aa962b58-6ac1-4c82-86e5-d89b29f40391","Type":"ContainerStarted","Data":"001ff51e94540b51e107a87a263eedcc1e5a010a6c12a22692ffae35020f0b8d"} Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.949814 4828 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hmxx8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.950069 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" podUID="5ba8ca1a-d67d-4042-bebb-94891b81644f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.951931 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xpv8b" podStartSLOduration=9.951914128 podStartE2EDuration="9.951914128s" podCreationTimestamp="2025-11-29 07:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:43.886584454 +0000 UTC m=+163.508660512" watchObservedRunningTime="2025-11-29 07:03:43.951914128 +0000 UTC m=+163.573990186" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.952225 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fkg5" podStartSLOduration=137.952221886 podStartE2EDuration="2m17.952221886s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:43.950353788 +0000 UTC m=+163.572429846" watchObservedRunningTime="2025-11-29 07:03:43.952221886 +0000 UTC m=+163.574297944" Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.968011 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:43 crc kubenswrapper[4828]: I1129 07:03:43.970111 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xfngx" Nov 29 07:03:43 crc kubenswrapper[4828]: E1129 07:03:43.970639 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:44.47062076 +0000 UTC m=+164.092696818 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.071959 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:44 crc kubenswrapper[4828]: E1129 07:03:44.072706 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:44.572690621 +0000 UTC m=+164.194766679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.177167 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:44 crc kubenswrapper[4828]: E1129 07:03:44.177587 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:44.677570165 +0000 UTC m=+164.299646223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.279207 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:44 crc kubenswrapper[4828]: E1129 07:03:44.279646 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:44.779632046 +0000 UTC m=+164.401708104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.382859 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:44 crc kubenswrapper[4828]: E1129 07:03:44.383194 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:44.883167135 +0000 UTC m=+164.505243193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.484365 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:44 crc kubenswrapper[4828]: E1129 07:03:44.484747 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:44.984734563 +0000 UTC m=+164.606810621 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.564549 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-db4cv"] Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.586075 4828 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.586525 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:44 crc kubenswrapper[4828]: E1129 07:03:44.587106 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:03:45.087081451 +0000 UTC m=+164.709157519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.687889 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:44 crc kubenswrapper[4828]: E1129 07:03:44.688344 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:03:45.18832341 +0000 UTC m=+164.810399498 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6p6v" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.699453 4828 patch_prober.go:28] interesting pod/router-default-5444994796-rmxsv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:03:44 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Nov 29 07:03:44 crc kubenswrapper[4828]: [+]process-running ok Nov 29 07:03:44 crc kubenswrapper[4828]: healthz check failed Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.699526 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmxsv" podUID="fbc422bf-1668-470a-96a8-d94bbe3a2209" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.799207 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:44 crc kubenswrapper[4828]: E1129 07:03:44.799684 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:03:45.299650769 +0000 UTC m=+164.921726827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.849414 4828 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-29T07:03:44.586110846Z","Handler":null,"Name":""} Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.867128 4828 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.867199 4828 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.910160 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.938825 4828 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlzkw container/packageserver 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.938891 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" podUID="c818ca52-9ce9-4fb0-8f1d-c27d8242ff1e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.966398 4828 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.966454 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.967456 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jxws9"] Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.972750 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-db4cv" 
event={"ID":"0a44e830-89c8-428e-ab90-d8936c069de4","Type":"ContainerStarted","Data":"03a4d399bd5339c4e06a1ccb3da366be0ee7cfa0375c5a0e63dfce6593dde172"} Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.974232 4828 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hmxx8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Nov 29 07:03:44 crc kubenswrapper[4828]: I1129 07:03:44.974312 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" podUID="5ba8ca1a-d67d-4042-bebb-94891b81644f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.052850 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpwkr"] Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.170170 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b2qvr"] Nov 29 07:03:45 crc kubenswrapper[4828]: W1129 07:03:45.185476 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c5bb383_f3ed_43cd_b62c_38d3e2922f11.slice/crio-578b5ccc91e7a8325c100fc75b1ae7a84f48368ac9472de97261c8ad64124d68 WatchSource:0}: Error finding container 578b5ccc91e7a8325c100fc75b1ae7a84f48368ac9472de97261c8ad64124d68: Status 404 returned error can't find the container with id 578b5ccc91e7a8325c100fc75b1ae7a84f48368ac9472de97261c8ad64124d68 Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.251839 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6p6v\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.262181 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vktx7"] Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.263552 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vktx7" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.270977 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.299215 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vktx7"] Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.324820 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.344640 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.359165 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.427318 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81124877-aea7-4853-b4da-978dcf29d980-catalog-content\") pod \"redhat-marketplace-vktx7\" (UID: \"81124877-aea7-4853-b4da-978dcf29d980\") " pod="openshift-marketplace/redhat-marketplace-vktx7" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.427406 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81124877-aea7-4853-b4da-978dcf29d980-utilities\") pod \"redhat-marketplace-vktx7\" (UID: \"81124877-aea7-4853-b4da-978dcf29d980\") " pod="openshift-marketplace/redhat-marketplace-vktx7" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.427438 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72bzk\" (UniqueName: \"kubernetes.io/projected/81124877-aea7-4853-b4da-978dcf29d980-kube-api-access-72bzk\") pod \"redhat-marketplace-vktx7\" (UID: \"81124877-aea7-4853-b4da-978dcf29d980\") " pod="openshift-marketplace/redhat-marketplace-vktx7" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.434190 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.532108 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81124877-aea7-4853-b4da-978dcf29d980-catalog-content\") pod \"redhat-marketplace-vktx7\" (UID: \"81124877-aea7-4853-b4da-978dcf29d980\") " 
pod="openshift-marketplace/redhat-marketplace-vktx7" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.532182 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81124877-aea7-4853-b4da-978dcf29d980-utilities\") pod \"redhat-marketplace-vktx7\" (UID: \"81124877-aea7-4853-b4da-978dcf29d980\") " pod="openshift-marketplace/redhat-marketplace-vktx7" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.532200 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72bzk\" (UniqueName: \"kubernetes.io/projected/81124877-aea7-4853-b4da-978dcf29d980-kube-api-access-72bzk\") pod \"redhat-marketplace-vktx7\" (UID: \"81124877-aea7-4853-b4da-978dcf29d980\") " pod="openshift-marketplace/redhat-marketplace-vktx7" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.533250 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81124877-aea7-4853-b4da-978dcf29d980-catalog-content\") pod \"redhat-marketplace-vktx7\" (UID: \"81124877-aea7-4853-b4da-978dcf29d980\") " pod="openshift-marketplace/redhat-marketplace-vktx7" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.533495 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81124877-aea7-4853-b4da-978dcf29d980-utilities\") pod \"redhat-marketplace-vktx7\" (UID: \"81124877-aea7-4853-b4da-978dcf29d980\") " pod="openshift-marketplace/redhat-marketplace-vktx7" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.566226 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72bzk\" (UniqueName: \"kubernetes.io/projected/81124877-aea7-4853-b4da-978dcf29d980-kube-api-access-72bzk\") pod \"redhat-marketplace-vktx7\" (UID: \"81124877-aea7-4853-b4da-978dcf29d980\") " 
pod="openshift-marketplace/redhat-marketplace-vktx7" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.581387 4828 patch_prober.go:28] interesting pod/router-default-5444994796-rmxsv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:03:45 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Nov 29 07:03:45 crc kubenswrapper[4828]: [+]process-running ok Nov 29 07:03:45 crc kubenswrapper[4828]: healthz check failed Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.581450 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmxsv" podUID="fbc422bf-1668-470a-96a8-d94bbe3a2209" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.646887 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gh2x8"] Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.654542 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.662326 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh2x8"] Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.773883 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vktx7" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.837886 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35451e26-ec80-4e68-bf86-4f0990c394af-utilities\") pod \"redhat-marketplace-gh2x8\" (UID: \"35451e26-ec80-4e68-bf86-4f0990c394af\") " pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.838317 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35451e26-ec80-4e68-bf86-4f0990c394af-catalog-content\") pod \"redhat-marketplace-gh2x8\" (UID: \"35451e26-ec80-4e68-bf86-4f0990c394af\") " pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.838349 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2vnz\" (UniqueName: \"kubernetes.io/projected/35451e26-ec80-4e68-bf86-4f0990c394af-kube-api-access-q2vnz\") pod \"redhat-marketplace-gh2x8\" (UID: \"35451e26-ec80-4e68-bf86-4f0990c394af\") " pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.939385 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35451e26-ec80-4e68-bf86-4f0990c394af-utilities\") pod \"redhat-marketplace-gh2x8\" (UID: \"35451e26-ec80-4e68-bf86-4f0990c394af\") " pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.939460 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35451e26-ec80-4e68-bf86-4f0990c394af-catalog-content\") pod \"redhat-marketplace-gh2x8\" 
(UID: \"35451e26-ec80-4e68-bf86-4f0990c394af\") " pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.939489 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2vnz\" (UniqueName: \"kubernetes.io/projected/35451e26-ec80-4e68-bf86-4f0990c394af-kube-api-access-q2vnz\") pod \"redhat-marketplace-gh2x8\" (UID: \"35451e26-ec80-4e68-bf86-4f0990c394af\") " pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.940227 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35451e26-ec80-4e68-bf86-4f0990c394af-utilities\") pod \"redhat-marketplace-gh2x8\" (UID: \"35451e26-ec80-4e68-bf86-4f0990c394af\") " pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.940484 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35451e26-ec80-4e68-bf86-4f0990c394af-catalog-content\") pod \"redhat-marketplace-gh2x8\" (UID: \"35451e26-ec80-4e68-bf86-4f0990c394af\") " pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.969982 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2vnz\" (UniqueName: \"kubernetes.io/projected/35451e26-ec80-4e68-bf86-4f0990c394af-kube-api-access-q2vnz\") pod \"redhat-marketplace-gh2x8\" (UID: \"35451e26-ec80-4e68-bf86-4f0990c394af\") " pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.976569 4828 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-swjkr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.976899 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" podUID="c282f664-abb6-4151-83a5-badb4471d931" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 07:03:45 crc kubenswrapper[4828]: I1129 07:03:45.984339 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.002563 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h6p6v"] Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.039718 4828 generic.go:334] "Generic (PLEG): container finished" podID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" containerID="abea0050fe7ba1da805e8d49f283380724ded4b9a8d3ec1bf595ce67bd2313c8" exitCode=0 Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.040694 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2qvr" event={"ID":"1c5bb383-f3ed-43cd-b62c-38d3e2922f11","Type":"ContainerDied","Data":"abea0050fe7ba1da805e8d49f283380724ded4b9a8d3ec1bf595ce67bd2313c8"} Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.040730 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2qvr" event={"ID":"1c5bb383-f3ed-43cd-b62c-38d3e2922f11","Type":"ContainerStarted","Data":"578b5ccc91e7a8325c100fc75b1ae7a84f48368ac9472de97261c8ad64124d68"} Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.047456 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 
29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.052372 4828 generic.go:334] "Generic (PLEG): container finished" podID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" containerID="e760616a0e4c4285d330aaad58e30718487092dbc67c9f02c413f490e0373c65" exitCode=0 Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.052593 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxws9" event={"ID":"9a9da14c-b652-4eca-bf03-8eedf90d40fe","Type":"ContainerDied","Data":"e760616a0e4c4285d330aaad58e30718487092dbc67c9f02c413f490e0373c65"} Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.052718 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxws9" event={"ID":"9a9da14c-b652-4eca-bf03-8eedf90d40fe","Type":"ContainerStarted","Data":"e3a991bcd28ae647611cb7e04760352853d7c4d777abb2312867645bd31949a9"} Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.056159 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r5hqw"] Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.057575 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.068777 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.082373 4828 generic.go:334] "Generic (PLEG): container finished" podID="0a44e830-89c8-428e-ab90-d8936c069de4" containerID="107985ab855786e5d558ca78e90711c98985c57920b2194ca91a3846905a4771" exitCode=0 Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.082655 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-db4cv" event={"ID":"0a44e830-89c8-428e-ab90-d8936c069de4","Type":"ContainerDied","Data":"107985ab855786e5d558ca78e90711c98985c57920b2194ca91a3846905a4771"} Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.082770 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r5hqw"] Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.090889 4828 generic.go:334] "Generic (PLEG): container finished" podID="634b47b0-ce44-446c-8f87-531a593c576b" containerID="5f4e3d8563cc18899f9777785bb6fa3e9dfc253c4496cbf8b653ce938561f65b" exitCode=0 Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.090984 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" event={"ID":"634b47b0-ce44-446c-8f87-531a593c576b","Type":"ContainerDied","Data":"5f4e3d8563cc18899f9777785bb6fa3e9dfc253c4496cbf8b653ce938561f65b"} Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.096326 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x5w66" event={"ID":"6d8629cd-6b91-47d6-be66-cc036042a6e8","Type":"ContainerStarted","Data":"e14c5ec61ad2ce94a0dfd4b5305a30be30bcdb8737490d091b7aae2614b27519"} Nov 29 07:03:46 crc kubenswrapper[4828]: W1129 07:03:46.137635 4828 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d23e223_6e12_45ff_80b3_1e65d6c36960.slice/crio-119931424c2ef8abaad1b97730953b233a5bdfd2a34382b960ffe6c1a749ea2d WatchSource:0}: Error finding container 119931424c2ef8abaad1b97730953b233a5bdfd2a34382b960ffe6c1a749ea2d: Status 404 returned error can't find the container with id 119931424c2ef8abaad1b97730953b233a5bdfd2a34382b960ffe6c1a749ea2d Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.138258 4828 generic.go:334] "Generic (PLEG): container finished" podID="eccbf47b-47fe-4980-b09b-cde621bb188a" containerID="ffff0fbcb978a51f0a4740c11383b0ca85ba6ec5be605de812a0b9403e6dfa4d" exitCode=0 Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.139731 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpwkr" event={"ID":"eccbf47b-47fe-4980-b09b-cde621bb188a","Type":"ContainerDied","Data":"ffff0fbcb978a51f0a4740c11383b0ca85ba6ec5be605de812a0b9403e6dfa4d"} Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.139777 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpwkr" event={"ID":"eccbf47b-47fe-4980-b09b-cde621bb188a","Type":"ContainerStarted","Data":"b783a3108dfb3cab40e52d83436f6c901942945371bb78d610eea0e31826f1a0"} Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.251109 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/097b513c-f25d-4a6d-9c88-90ac8f322a19-catalog-content\") pod \"redhat-operators-r5hqw\" (UID: \"097b513c-f25d-4a6d-9c88-90ac8f322a19\") " pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.251237 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znxdd\" (UniqueName: 
\"kubernetes.io/projected/097b513c-f25d-4a6d-9c88-90ac8f322a19-kube-api-access-znxdd\") pod \"redhat-operators-r5hqw\" (UID: \"097b513c-f25d-4a6d-9c88-90ac8f322a19\") " pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.251281 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/097b513c-f25d-4a6d-9c88-90ac8f322a19-utilities\") pod \"redhat-operators-r5hqw\" (UID: \"097b513c-f25d-4a6d-9c88-90ac8f322a19\") " pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.299312 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-twkcr"] Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.333977 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.355604 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/097b513c-f25d-4a6d-9c88-90ac8f322a19-utilities\") pod \"redhat-operators-r5hqw\" (UID: \"097b513c-f25d-4a6d-9c88-90ac8f322a19\") " pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.355746 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/097b513c-f25d-4a6d-9c88-90ac8f322a19-catalog-content\") pod \"redhat-operators-r5hqw\" (UID: \"097b513c-f25d-4a6d-9c88-90ac8f322a19\") " pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.355786 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znxdd\" (UniqueName: 
\"kubernetes.io/projected/097b513c-f25d-4a6d-9c88-90ac8f322a19-kube-api-access-znxdd\") pod \"redhat-operators-r5hqw\" (UID: \"097b513c-f25d-4a6d-9c88-90ac8f322a19\") " pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.357198 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/097b513c-f25d-4a6d-9c88-90ac8f322a19-utilities\") pod \"redhat-operators-r5hqw\" (UID: \"097b513c-f25d-4a6d-9c88-90ac8f322a19\") " pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.357564 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/097b513c-f25d-4a6d-9c88-90ac8f322a19-catalog-content\") pod \"redhat-operators-r5hqw\" (UID: \"097b513c-f25d-4a6d-9c88-90ac8f322a19\") " pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.411410 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twkcr"] Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.441826 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znxdd\" (UniqueName: \"kubernetes.io/projected/097b513c-f25d-4a6d-9c88-90ac8f322a19-kube-api-access-znxdd\") pod \"redhat-operators-r5hqw\" (UID: \"097b513c-f25d-4a6d-9c88-90ac8f322a19\") " pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.457878 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xkj7\" (UniqueName: \"kubernetes.io/projected/edc8363b-0cee-48b5-b568-8a694fdc91eb-kube-api-access-4xkj7\") pod \"redhat-operators-twkcr\" (UID: \"edc8363b-0cee-48b5-b568-8a694fdc91eb\") " pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:03:46 crc 
kubenswrapper[4828]: I1129 07:03:46.457967 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc8363b-0cee-48b5-b568-8a694fdc91eb-catalog-content\") pod \"redhat-operators-twkcr\" (UID: \"edc8363b-0cee-48b5-b568-8a694fdc91eb\") " pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.458028 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc8363b-0cee-48b5-b568-8a694fdc91eb-utilities\") pod \"redhat-operators-twkcr\" (UID: \"edc8363b-0cee-48b5-b568-8a694fdc91eb\") " pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.498255 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.507052 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-swjkr" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.576193 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xkj7\" (UniqueName: \"kubernetes.io/projected/edc8363b-0cee-48b5-b568-8a694fdc91eb-kube-api-access-4xkj7\") pod \"redhat-operators-twkcr\" (UID: \"edc8363b-0cee-48b5-b568-8a694fdc91eb\") " pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.576285 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc8363b-0cee-48b5-b568-8a694fdc91eb-catalog-content\") pod \"redhat-operators-twkcr\" (UID: \"edc8363b-0cee-48b5-b568-8a694fdc91eb\") " pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:03:46 
crc kubenswrapper[4828]: I1129 07:03:46.576349 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc8363b-0cee-48b5-b568-8a694fdc91eb-utilities\") pod \"redhat-operators-twkcr\" (UID: \"edc8363b-0cee-48b5-b568-8a694fdc91eb\") " pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.577211 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc8363b-0cee-48b5-b568-8a694fdc91eb-utilities\") pod \"redhat-operators-twkcr\" (UID: \"edc8363b-0cee-48b5-b568-8a694fdc91eb\") " pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.577846 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc8363b-0cee-48b5-b568-8a694fdc91eb-catalog-content\") pod \"redhat-operators-twkcr\" (UID: \"edc8363b-0cee-48b5-b568-8a694fdc91eb\") " pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.590539 4828 patch_prober.go:28] interesting pod/router-default-5444994796-rmxsv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:03:46 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Nov 29 07:03:46 crc kubenswrapper[4828]: [+]process-running ok Nov 29 07:03:46 crc kubenswrapper[4828]: healthz check failed Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.590637 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmxsv" podUID="fbc422bf-1668-470a-96a8-d94bbe3a2209" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:03:46 crc kubenswrapper[4828]: W1129 
07:03:46.599412 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81124877_aea7_4853_b4da_978dcf29d980.slice/crio-ef9786a1014fac680ff907ff0dcbd1b8ac431418553f01aaad3fa08277523548 WatchSource:0}: Error finding container ef9786a1014fac680ff907ff0dcbd1b8ac431418553f01aaad3fa08277523548: Status 404 returned error can't find the container with id ef9786a1014fac680ff907ff0dcbd1b8ac431418553f01aaad3fa08277523548 Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.608030 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vktx7"] Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.656058 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xkj7\" (UniqueName: \"kubernetes.io/projected/edc8363b-0cee-48b5-b568-8a694fdc91eb-kube-api-access-4xkj7\") pod \"redhat-operators-twkcr\" (UID: \"edc8363b-0cee-48b5-b568-8a694fdc91eb\") " pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.721746 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh2x8"] Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.772025 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.890528 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r5hqw"] Nov 29 07:03:46 crc kubenswrapper[4828]: W1129 07:03:46.905334 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod097b513c_f25d_4a6d_9c88_90ac8f322a19.slice/crio-93a95d1ef35062b9a906135b8a205bf415137620181a57b590396c25467b2124 WatchSource:0}: Error finding container 93a95d1ef35062b9a906135b8a205bf415137620181a57b590396c25467b2124: Status 404 returned error can't find the container with id 93a95d1ef35062b9a906135b8a205bf415137620181a57b590396c25467b2124 Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.976315 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:46 crc kubenswrapper[4828]: I1129 07:03:46.983203 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-bdxmg" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.027734 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twkcr"] Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.149086 4828 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvnk2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.149443 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mvnk2" podUID="c52a7bb7-0f41-4457-a354-be5d25881767" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 
10.217.0.14:8080: connect: connection refused" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.149366 4828 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvnk2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.149708 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvnk2" podUID="c52a7bb7-0f41-4457-a354-be5d25881767" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.163744 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh2x8" event={"ID":"35451e26-ec80-4e68-bf86-4f0990c394af","Type":"ContainerStarted","Data":"06c333e635aee94c76045ed9adc23620a47bae78f379096c688b4cad8ba53575"} Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.163790 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh2x8" event={"ID":"35451e26-ec80-4e68-bf86-4f0990c394af","Type":"ContainerStarted","Data":"18eec56362e747ca7afd0e8b91b82239e0083e8e32cd71e706178fee193bf888"} Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.180424 4828 generic.go:334] "Generic (PLEG): container finished" podID="81124877-aea7-4853-b4da-978dcf29d980" containerID="96cc998b65e711362fd60fc875a93efde52cfaf91a04a1ea4bfa1ee7667b79b4" exitCode=0 Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.180523 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vktx7" event={"ID":"81124877-aea7-4853-b4da-978dcf29d980","Type":"ContainerDied","Data":"96cc998b65e711362fd60fc875a93efde52cfaf91a04a1ea4bfa1ee7667b79b4"} Nov 29 07:03:47 crc 
kubenswrapper[4828]: I1129 07:03:47.180558 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vktx7" event={"ID":"81124877-aea7-4853-b4da-978dcf29d980","Type":"ContainerStarted","Data":"ef9786a1014fac680ff907ff0dcbd1b8ac431418553f01aaad3fa08277523548"} Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.186777 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x5w66" event={"ID":"6d8629cd-6b91-47d6-be66-cc036042a6e8","Type":"ContainerStarted","Data":"485aaa70d156b987df89f6088454c9b00593305f40645b5bec5e47d1221b08e0"} Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.191932 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twkcr" event={"ID":"edc8363b-0cee-48b5-b568-8a694fdc91eb","Type":"ContainerStarted","Data":"0a5161f37193fe65fbbf6419e25819e5daad30b533db25d67a67af189a166d7c"} Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.194874 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" event={"ID":"9d23e223-6e12-45ff-80b3-1e65d6c36960","Type":"ContainerStarted","Data":"16af16523b2e021d8e0ac669303d8baef3e00a8da46bde953f071a96f832c842"} Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.194909 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" event={"ID":"9d23e223-6e12-45ff-80b3-1e65d6c36960","Type":"ContainerStarted","Data":"119931424c2ef8abaad1b97730953b233a5bdfd2a34382b960ffe6c1a749ea2d"} Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.195455 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.197761 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r5hqw" 
event={"ID":"097b513c-f25d-4a6d-9c88-90ac8f322a19","Type":"ContainerStarted","Data":"be2fd19307c108a4245c4ca7c90a735785cde2914f30d63e3d909cfeab232a98"} Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.197788 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r5hqw" event={"ID":"097b513c-f25d-4a6d-9c88-90ac8f322a19","Type":"ContainerStarted","Data":"93a95d1ef35062b9a906135b8a205bf415137620181a57b590396c25467b2124"} Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.249571 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-x5w66" podStartSLOduration=13.249554261 podStartE2EDuration="13.249554261s" podCreationTimestamp="2025-11-29 07:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:47.249054238 +0000 UTC m=+166.871130306" watchObservedRunningTime="2025-11-29 07:03:47.249554261 +0000 UTC m=+166.871630319" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.267907 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" podStartSLOduration=141.267888083 podStartE2EDuration="2m21.267888083s" podCreationTimestamp="2025-11-29 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:03:47.265703537 +0000 UTC m=+166.887779605" watchObservedRunningTime="2025-11-29 07:03:47.267888083 +0000 UTC m=+166.889964141" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.387907 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.455861 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.456759 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.456762 4828 patch_prober.go:28] interesting pod/console-f9d7485db-9vbf7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.456869 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9vbf7" podUID="78cb844a-3bae-4cd2-9fb8-63f20fec1755" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.490106 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj4km\" (UniqueName: \"kubernetes.io/projected/634b47b0-ce44-446c-8f87-531a593c576b-kube-api-access-jj4km\") pod \"634b47b0-ce44-446c-8f87-531a593c576b\" (UID: \"634b47b0-ce44-446c-8f87-531a593c576b\") " Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.490341 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/634b47b0-ce44-446c-8f87-531a593c576b-secret-volume\") pod \"634b47b0-ce44-446c-8f87-531a593c576b\" (UID: \"634b47b0-ce44-446c-8f87-531a593c576b\") " Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.490387 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/634b47b0-ce44-446c-8f87-531a593c576b-config-volume\") pod \"634b47b0-ce44-446c-8f87-531a593c576b\" (UID: \"634b47b0-ce44-446c-8f87-531a593c576b\") " Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.491565 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/634b47b0-ce44-446c-8f87-531a593c576b-config-volume" (OuterVolumeSpecName: "config-volume") pod "634b47b0-ce44-446c-8f87-531a593c576b" (UID: "634b47b0-ce44-446c-8f87-531a593c576b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.495599 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/634b47b0-ce44-446c-8f87-531a593c576b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "634b47b0-ce44-446c-8f87-531a593c576b" (UID: "634b47b0-ce44-446c-8f87-531a593c576b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.498399 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/634b47b0-ce44-446c-8f87-531a593c576b-kube-api-access-jj4km" (OuterVolumeSpecName: "kube-api-access-jj4km") pod "634b47b0-ce44-446c-8f87-531a593c576b" (UID: "634b47b0-ce44-446c-8f87-531a593c576b"). InnerVolumeSpecName "kube-api-access-jj4km". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.559379 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.563664 4828 patch_prober.go:28] interesting pod/router-default-5444994796-rmxsv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:03:47 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Nov 29 07:03:47 crc kubenswrapper[4828]: [+]process-running ok Nov 29 07:03:47 crc kubenswrapper[4828]: healthz check failed Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.563727 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmxsv" podUID="fbc422bf-1668-470a-96a8-d94bbe3a2209" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.592556 4828 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/634b47b0-ce44-446c-8f87-531a593c576b-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.592599 4828 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/634b47b0-ce44-446c-8f87-531a593c576b-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.592610 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj4km\" (UniqueName: \"kubernetes.io/projected/634b47b0-ce44-446c-8f87-531a593c576b-kube-api-access-jj4km\") on node \"crc\" DevicePath \"\"" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.624651 4828 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlzkw" Nov 29 07:03:47 crc kubenswrapper[4828]: I1129 07:03:47.962366 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:03:48 crc kubenswrapper[4828]: I1129 07:03:48.214733 4828 generic.go:334] "Generic (PLEG): container finished" podID="edc8363b-0cee-48b5-b568-8a694fdc91eb" containerID="fcdae28d388ec61fcd67268fc1f84e4d2278c4e8e083211f54298d849bd5dee2" exitCode=0 Nov 29 07:03:48 crc kubenswrapper[4828]: I1129 07:03:48.214897 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twkcr" event={"ID":"edc8363b-0cee-48b5-b568-8a694fdc91eb","Type":"ContainerDied","Data":"fcdae28d388ec61fcd67268fc1f84e4d2278c4e8e083211f54298d849bd5dee2"} Nov 29 07:03:48 crc kubenswrapper[4828]: I1129 07:03:48.243113 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" event={"ID":"634b47b0-ce44-446c-8f87-531a593c576b","Type":"ContainerDied","Data":"259b9d08a69ad9f7843607626a4c8982390d7d0e96f8e110aa4be07531637157"} Nov 29 07:03:48 crc kubenswrapper[4828]: I1129 07:03:48.243204 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="259b9d08a69ad9f7843607626a4c8982390d7d0e96f8e110aa4be07531637157" Nov 29 07:03:48 crc kubenswrapper[4828]: I1129 07:03:48.243127 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn" Nov 29 07:03:48 crc kubenswrapper[4828]: I1129 07:03:48.252342 4828 generic.go:334] "Generic (PLEG): container finished" podID="097b513c-f25d-4a6d-9c88-90ac8f322a19" containerID="be2fd19307c108a4245c4ca7c90a735785cde2914f30d63e3d909cfeab232a98" exitCode=0 Nov 29 07:03:48 crc kubenswrapper[4828]: I1129 07:03:48.252636 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r5hqw" event={"ID":"097b513c-f25d-4a6d-9c88-90ac8f322a19","Type":"ContainerDied","Data":"be2fd19307c108a4245c4ca7c90a735785cde2914f30d63e3d909cfeab232a98"} Nov 29 07:03:48 crc kubenswrapper[4828]: I1129 07:03:48.256613 4828 generic.go:334] "Generic (PLEG): container finished" podID="35451e26-ec80-4e68-bf86-4f0990c394af" containerID="06c333e635aee94c76045ed9adc23620a47bae78f379096c688b4cad8ba53575" exitCode=0 Nov 29 07:03:48 crc kubenswrapper[4828]: I1129 07:03:48.256863 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh2x8" event={"ID":"35451e26-ec80-4e68-bf86-4f0990c394af","Type":"ContainerDied","Data":"06c333e635aee94c76045ed9adc23620a47bae78f379096c688b4cad8ba53575"} Nov 29 07:03:48 crc kubenswrapper[4828]: I1129 07:03:48.563702 4828 patch_prober.go:28] interesting pod/router-default-5444994796-rmxsv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:03:48 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Nov 29 07:03:48 crc kubenswrapper[4828]: [+]process-running ok Nov 29 07:03:48 crc kubenswrapper[4828]: healthz check failed Nov 29 07:03:48 crc kubenswrapper[4828]: I1129 07:03:48.563782 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmxsv" podUID="fbc422bf-1668-470a-96a8-d94bbe3a2209" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:03:49 crc kubenswrapper[4828]: I1129 07:03:49.564977 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:49 crc kubenswrapper[4828]: I1129 07:03:49.571237 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-rmxsv" Nov 29 07:03:49 crc kubenswrapper[4828]: I1129 07:03:49.945712 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 07:03:49 crc kubenswrapper[4828]: E1129 07:03:49.946029 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="634b47b0-ce44-446c-8f87-531a593c576b" containerName="collect-profiles" Nov 29 07:03:49 crc kubenswrapper[4828]: I1129 07:03:49.946071 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="634b47b0-ce44-446c-8f87-531a593c576b" containerName="collect-profiles" Nov 29 07:03:49 crc kubenswrapper[4828]: I1129 07:03:49.946302 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="634b47b0-ce44-446c-8f87-531a593c576b" containerName="collect-profiles" Nov 29 07:03:49 crc kubenswrapper[4828]: I1129 07:03:49.947653 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:03:49 crc kubenswrapper[4828]: I1129 07:03:49.952520 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 07:03:49 crc kubenswrapper[4828]: I1129 07:03:49.960660 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 29 07:03:49 crc kubenswrapper[4828]: I1129 07:03:49.960948 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 29 07:03:50 crc kubenswrapper[4828]: I1129 07:03:50.041667 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:03:50 crc kubenswrapper[4828]: I1129 07:03:50.041777 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:03:50 crc kubenswrapper[4828]: I1129 07:03:50.143978 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:03:50 crc kubenswrapper[4828]: I1129 07:03:50.144087 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:03:50 crc kubenswrapper[4828]: I1129 07:03:50.144248 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:03:50 crc kubenswrapper[4828]: I1129 07:03:50.187983 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:03:50 crc kubenswrapper[4828]: I1129 07:03:50.290750 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:03:50 crc kubenswrapper[4828]: I1129 07:03:50.359458 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs\") pod \"network-metrics-daemon-4ffn6\" (UID: \"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\") " pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:50 crc kubenswrapper[4828]: I1129 07:03:50.368083 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6581e2a-a98c-493d-8c8f-20c5b4c4b17c-metrics-certs\") pod \"network-metrics-daemon-4ffn6\" (UID: \"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c\") " pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:50 crc kubenswrapper[4828]: I1129 07:03:50.443920 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4ffn6" Nov 29 07:03:50 crc kubenswrapper[4828]: I1129 07:03:50.949450 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.026806 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.027804 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.049961 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.050235 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.058760 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.106447 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc30c2f8-27e2-4703-93d6-796ba5fc355a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dc30c2f8-27e2-4703-93d6-796ba5fc355a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.106834 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc30c2f8-27e2-4703-93d6-796ba5fc355a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dc30c2f8-27e2-4703-93d6-796ba5fc355a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.201336 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4ffn6"] Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.208694 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc30c2f8-27e2-4703-93d6-796ba5fc355a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dc30c2f8-27e2-4703-93d6-796ba5fc355a\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.208832 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc30c2f8-27e2-4703-93d6-796ba5fc355a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dc30c2f8-27e2-4703-93d6-796ba5fc355a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.208823 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc30c2f8-27e2-4703-93d6-796ba5fc355a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dc30c2f8-27e2-4703-93d6-796ba5fc355a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:03:51 crc kubenswrapper[4828]: W1129 07:03:51.246076 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6581e2a_a98c_493d_8c8f_20c5b4c4b17c.slice/crio-b7efa8eb7b46a09ebd642b51535c0f3cfdf7740cf4c9c9dec1ba6aa7e8513e49 WatchSource:0}: Error finding container b7efa8eb7b46a09ebd642b51535c0f3cfdf7740cf4c9c9dec1ba6aa7e8513e49: Status 404 returned error can't find the container with id b7efa8eb7b46a09ebd642b51535c0f3cfdf7740cf4c9c9dec1ba6aa7e8513e49 Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.265517 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc30c2f8-27e2-4703-93d6-796ba5fc355a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dc30c2f8-27e2-4703-93d6-796ba5fc355a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.407806 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.497588 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" event={"ID":"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c","Type":"ContainerStarted","Data":"b7efa8eb7b46a09ebd642b51535c0f3cfdf7740cf4c9c9dec1ba6aa7e8513e49"} Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.502109 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad","Type":"ContainerStarted","Data":"35d1bbc03dd12d1f231c5f505c2cd7cbf5dac52861446b2c5dcac35eff052c8c"} Nov 29 07:03:51 crc kubenswrapper[4828]: I1129 07:03:51.824239 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 29 07:03:52 crc kubenswrapper[4828]: I1129 07:03:52.635840 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dc30c2f8-27e2-4703-93d6-796ba5fc355a","Type":"ContainerStarted","Data":"a94615272390f5a46d2bcf815844b3ac08cb2d131111dba5e30853eec8374d51"} Nov 29 07:03:52 crc kubenswrapper[4828]: I1129 07:03:52.639707 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad","Type":"ContainerStarted","Data":"d317dcaf78ae2aaf46e298129e939cd3379f0f2c75bbb6dc54e055ca9c819b38"} Nov 29 07:03:53 crc kubenswrapper[4828]: I1129 07:03:53.117121 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xpv8b" Nov 29 07:03:55 crc kubenswrapper[4828]: I1129 07:03:55.217238 4828 generic.go:334] "Generic (PLEG): container finished" podID="3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad" containerID="d317dcaf78ae2aaf46e298129e939cd3379f0f2c75bbb6dc54e055ca9c819b38" exitCode=0 
Nov 29 07:03:55 crc kubenswrapper[4828]: I1129 07:03:55.217318 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad","Type":"ContainerDied","Data":"d317dcaf78ae2aaf46e298129e939cd3379f0f2c75bbb6dc54e055ca9c819b38"} Nov 29 07:03:55 crc kubenswrapper[4828]: I1129 07:03:55.222228 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dc30c2f8-27e2-4703-93d6-796ba5fc355a","Type":"ContainerStarted","Data":"4b5cd919843a6737808e27438a051ba06124076989a6a1ac85dd67ef96ddff2e"} Nov 29 07:03:55 crc kubenswrapper[4828]: I1129 07:03:55.224799 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" event={"ID":"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c","Type":"ContainerStarted","Data":"b5f690873f11fe8eb11c24c2118ea963c34b011737a548dd94395d898ae1bbfd"} Nov 29 07:03:56 crc kubenswrapper[4828]: I1129 07:03:56.235761 4828 generic.go:334] "Generic (PLEG): container finished" podID="dc30c2f8-27e2-4703-93d6-796ba5fc355a" containerID="4b5cd919843a6737808e27438a051ba06124076989a6a1ac85dd67ef96ddff2e" exitCode=0 Nov 29 07:03:56 crc kubenswrapper[4828]: I1129 07:03:56.235805 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dc30c2f8-27e2-4703-93d6-796ba5fc355a","Type":"ContainerDied","Data":"4b5cd919843a6737808e27438a051ba06124076989a6a1ac85dd67ef96ddff2e"} Nov 29 07:03:56 crc kubenswrapper[4828]: I1129 07:03:56.242286 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-rwgkq_580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8/cluster-samples-operator/0.log" Nov 29 07:03:56 crc kubenswrapper[4828]: I1129 07:03:56.242339 4828 generic.go:334] "Generic (PLEG): container finished" podID="580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8" 
containerID="4fecf197028ce57e06fde76dd2882ba807d8a68d6e07ba631a3ebfd60845bc4c" exitCode=2 Nov 29 07:03:56 crc kubenswrapper[4828]: I1129 07:03:56.242408 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq" event={"ID":"580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8","Type":"ContainerDied","Data":"4fecf197028ce57e06fde76dd2882ba807d8a68d6e07ba631a3ebfd60845bc4c"} Nov 29 07:03:56 crc kubenswrapper[4828]: I1129 07:03:56.242822 4828 scope.go:117] "RemoveContainer" containerID="4fecf197028ce57e06fde76dd2882ba807d8a68d6e07ba631a3ebfd60845bc4c" Nov 29 07:03:56 crc kubenswrapper[4828]: I1129 07:03:56.823142 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:03:56 crc kubenswrapper[4828]: I1129 07:03:56.929557 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad-kube-api-access\") pod \"3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad\" (UID: \"3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad\") " Nov 29 07:03:56 crc kubenswrapper[4828]: I1129 07:03:56.929727 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad-kubelet-dir\") pod \"3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad\" (UID: \"3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad\") " Nov 29 07:03:56 crc kubenswrapper[4828]: I1129 07:03:56.930107 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad" (UID: "3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:03:56 crc kubenswrapper[4828]: I1129 07:03:56.942926 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad" (UID: "3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:03:57 crc kubenswrapper[4828]: I1129 07:03:57.032088 4828 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:03:57 crc kubenswrapper[4828]: I1129 07:03:57.032141 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:03:57 crc kubenswrapper[4828]: I1129 07:03:57.161302 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mvnk2" Nov 29 07:03:57 crc kubenswrapper[4828]: I1129 07:03:57.249388 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:03:57 crc kubenswrapper[4828]: I1129 07:03:57.249470 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad","Type":"ContainerDied","Data":"35d1bbc03dd12d1f231c5f505c2cd7cbf5dac52861446b2c5dcac35eff052c8c"} Nov 29 07:03:57 crc kubenswrapper[4828]: I1129 07:03:57.249502 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35d1bbc03dd12d1f231c5f505c2cd7cbf5dac52861446b2c5dcac35eff052c8c" Nov 29 07:03:57 crc kubenswrapper[4828]: I1129 07:03:57.686376 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:03:57 crc kubenswrapper[4828]: I1129 07:03:57.745203 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc30c2f8-27e2-4703-93d6-796ba5fc355a-kubelet-dir\") pod \"dc30c2f8-27e2-4703-93d6-796ba5fc355a\" (UID: \"dc30c2f8-27e2-4703-93d6-796ba5fc355a\") " Nov 29 07:03:57 crc kubenswrapper[4828]: I1129 07:03:57.745353 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc30c2f8-27e2-4703-93d6-796ba5fc355a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dc30c2f8-27e2-4703-93d6-796ba5fc355a" (UID: "dc30c2f8-27e2-4703-93d6-796ba5fc355a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:03:57 crc kubenswrapper[4828]: I1129 07:03:57.745398 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc30c2f8-27e2-4703-93d6-796ba5fc355a-kube-api-access\") pod \"dc30c2f8-27e2-4703-93d6-796ba5fc355a\" (UID: \"dc30c2f8-27e2-4703-93d6-796ba5fc355a\") " Nov 29 07:03:57 crc kubenswrapper[4828]: I1129 07:03:57.745742 4828 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc30c2f8-27e2-4703-93d6-796ba5fc355a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:03:57 crc kubenswrapper[4828]: I1129 07:03:57.748620 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc30c2f8-27e2-4703-93d6-796ba5fc355a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dc30c2f8-27e2-4703-93d6-796ba5fc355a" (UID: "dc30c2f8-27e2-4703-93d6-796ba5fc355a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:03:57 crc kubenswrapper[4828]: I1129 07:03:57.846493 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc30c2f8-27e2-4703-93d6-796ba5fc355a-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:03:58 crc kubenswrapper[4828]: I1129 07:03:58.279754 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dc30c2f8-27e2-4703-93d6-796ba5fc355a","Type":"ContainerDied","Data":"a94615272390f5a46d2bcf815844b3ac08cb2d131111dba5e30853eec8374d51"} Nov 29 07:03:58 crc kubenswrapper[4828]: I1129 07:03:58.279820 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a94615272390f5a46d2bcf815844b3ac08cb2d131111dba5e30853eec8374d51" Nov 29 07:03:58 crc kubenswrapper[4828]: I1129 07:03:58.279837 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:03:58 crc kubenswrapper[4828]: I1129 07:03:58.630566 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:03:58 crc kubenswrapper[4828]: I1129 07:03:58.635420 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:04:00 crc kubenswrapper[4828]: I1129 07:04:00.291387 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4ffn6" event={"ID":"f6581e2a-a98c-493d-8c8f-20c5b4c4b17c","Type":"ContainerStarted","Data":"31d18ed7cab4316d29e7234a60968b8be2a5bd14b47180ae69866ae2caf48d05"} Nov 29 07:04:01 crc kubenswrapper[4828]: I1129 07:04:01.307469 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-rwgkq_580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8/cluster-samples-operator/0.log" Nov 29 07:04:01 crc kubenswrapper[4828]: I1129 07:04:01.307516 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rwgkq" event={"ID":"580ebf5d-d05c-49ab-9a9a-6d24a1bd9ee8","Type":"ContainerStarted","Data":"26f4981d0da5783ad7edfabb71b257f4b5d02d491a6264e502efe077c92677f1"} Nov 29 07:04:03 crc kubenswrapper[4828]: I1129 07:04:03.338175 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-4ffn6" podStartSLOduration=158.33815281 podStartE2EDuration="2m38.33815281s" podCreationTimestamp="2025-11-29 07:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:04:03.332549396 +0000 UTC m=+182.954625454" watchObservedRunningTime="2025-11-29 07:04:03.33815281 +0000 UTC m=+182.960228868" Nov 29 07:04:05 crc kubenswrapper[4828]: I1129 07:04:05.350854 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:04:11 crc kubenswrapper[4828]: I1129 07:04:11.486706 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:04:11 crc kubenswrapper[4828]: I1129 07:04:11.487036 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 29 07:04:17 crc kubenswrapper[4828]: I1129 07:04:17.649369 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4njtf" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.139956 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 07:04:23 crc kubenswrapper[4828]: E1129 07:04:23.140739 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad" containerName="pruner" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.140777 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad" containerName="pruner" Nov 29 07:04:23 crc kubenswrapper[4828]: E1129 07:04:23.140815 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc30c2f8-27e2-4703-93d6-796ba5fc355a" containerName="pruner" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.140824 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc30c2f8-27e2-4703-93d6-796ba5fc355a" containerName="pruner" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.141016 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc30c2f8-27e2-4703-93d6-796ba5fc355a" containerName="pruner" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.141032 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ca5c2ae-9197-4dfa-9e6e-48bb20e384ad" containerName="pruner" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.141615 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.146130 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.151531 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.154197 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.229956 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73fd3fc4-4f2d-464f-8fd1-766389f42933-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"73fd3fc4-4f2d-464f-8fd1-766389f42933\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.230029 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73fd3fc4-4f2d-464f-8fd1-766389f42933-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"73fd3fc4-4f2d-464f-8fd1-766389f42933\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.331107 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73fd3fc4-4f2d-464f-8fd1-766389f42933-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"73fd3fc4-4f2d-464f-8fd1-766389f42933\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.331169 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/73fd3fc4-4f2d-464f-8fd1-766389f42933-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"73fd3fc4-4f2d-464f-8fd1-766389f42933\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.331289 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73fd3fc4-4f2d-464f-8fd1-766389f42933-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"73fd3fc4-4f2d-464f-8fd1-766389f42933\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.351001 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73fd3fc4-4f2d-464f-8fd1-766389f42933-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"73fd3fc4-4f2d-464f-8fd1-766389f42933\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:04:23 crc kubenswrapper[4828]: I1129 07:04:23.464072 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:04:27 crc kubenswrapper[4828]: I1129 07:04:27.536041 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 07:04:27 crc kubenswrapper[4828]: I1129 07:04:27.537123 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:04:27 crc kubenswrapper[4828]: I1129 07:04:27.548332 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 07:04:27 crc kubenswrapper[4828]: I1129 07:04:27.600726 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-kubelet-dir\") pod \"installer-9-crc\" (UID: \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:04:27 crc kubenswrapper[4828]: I1129 07:04:27.601068 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-var-lock\") pod \"installer-9-crc\" (UID: \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:04:27 crc kubenswrapper[4828]: I1129 07:04:27.601301 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-kube-api-access\") pod \"installer-9-crc\" (UID: \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:04:27 crc kubenswrapper[4828]: I1129 07:04:27.702090 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-kube-api-access\") pod \"installer-9-crc\" (UID: \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:04:27 crc kubenswrapper[4828]: I1129 07:04:27.702186 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-kubelet-dir\") pod \"installer-9-crc\" (UID: \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:04:27 crc kubenswrapper[4828]: I1129 07:04:27.702257 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-var-lock\") pod \"installer-9-crc\" (UID: \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:04:27 crc kubenswrapper[4828]: I1129 07:04:27.702285 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-kubelet-dir\") pod \"installer-9-crc\" (UID: \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:04:27 crc kubenswrapper[4828]: I1129 07:04:27.702532 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-var-lock\") pod \"installer-9-crc\" (UID: \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:04:27 crc kubenswrapper[4828]: I1129 07:04:27.722252 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-kube-api-access\") pod \"installer-9-crc\" (UID: \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:04:27 crc kubenswrapper[4828]: I1129 07:04:27.875723 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:04:41 crc kubenswrapper[4828]: I1129 07:04:41.486948 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:04:41 crc kubenswrapper[4828]: I1129 07:04:41.487554 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:04:41 crc kubenswrapper[4828]: I1129 07:04:41.487611 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:04:41 crc kubenswrapper[4828]: I1129 07:04:41.488309 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:04:41 crc kubenswrapper[4828]: I1129 07:04:41.488492 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be" gracePeriod=600 Nov 29 07:04:56 crc kubenswrapper[4828]: I1129 07:04:56.696834 4828 generic.go:334] "Generic (PLEG): container finished" 
podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be" exitCode=0 Nov 29 07:04:56 crc kubenswrapper[4828]: I1129 07:04:56.791778 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be"} Nov 29 07:05:01 crc kubenswrapper[4828]: E1129 07:05:01.764794 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 29 07:05:01 crc kubenswrapper[4828]: E1129 07:05:01.766294 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2vnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gh2x8_openshift-marketplace(35451e26-ec80-4e68-bf86-4f0990c394af): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:05:01 crc kubenswrapper[4828]: E1129 07:05:01.768762 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-gh2x8" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" Nov 29 07:05:01 crc 
kubenswrapper[4828]: E1129 07:05:01.826642 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 29 07:05:01 crc kubenswrapper[4828]: E1129 07:05:01.827063 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72bzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-vktx7_openshift-marketplace(81124877-aea7-4853-b4da-978dcf29d980): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:05:01 crc kubenswrapper[4828]: E1129 07:05:01.828219 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vktx7" podUID="81124877-aea7-4853-b4da-978dcf29d980" Nov 29 07:05:04 crc kubenswrapper[4828]: E1129 07:05:04.671898 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vktx7" podUID="81124877-aea7-4853-b4da-978dcf29d980" Nov 29 07:05:04 crc kubenswrapper[4828]: E1129 07:05:04.673157 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gh2x8" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" Nov 29 07:05:06 crc kubenswrapper[4828]: E1129 07:05:06.183392 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 29 07:05:06 crc kubenswrapper[4828]: E1129 07:05:06.183617 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm555,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-db4cv_openshift-marketplace(0a44e830-89c8-428e-ab90-d8936c069de4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:05:06 crc kubenswrapper[4828]: E1129 07:05:06.185628 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-db4cv" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" Nov 29 07:05:07 crc kubenswrapper[4828]: E1129 07:05:07.803206 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-db4cv" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" Nov 29 07:05:08 crc kubenswrapper[4828]: E1129 07:05:08.555164 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 29 07:05:08 crc kubenswrapper[4828]: E1129 07:05:08.555346 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7ts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vpwkr_openshift-marketplace(eccbf47b-47fe-4980-b09b-cde621bb188a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:05:08 crc kubenswrapper[4828]: E1129 07:05:08.556518 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vpwkr" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" Nov 29 07:05:11 crc 
kubenswrapper[4828]: E1129 07:05:11.410016 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vpwkr" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" Nov 29 07:05:11 crc kubenswrapper[4828]: E1129 07:05:11.469590 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 29 07:05:11 crc kubenswrapper[4828]: E1129 07:05:11.469759 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5q9v4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-b2qvr_openshift-marketplace(1c5bb383-f3ed-43cd-b62c-38d3e2922f11): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:05:11 crc kubenswrapper[4828]: E1129 07:05:11.470964 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-b2qvr" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" Nov 29 07:05:11 crc 
kubenswrapper[4828]: E1129 07:05:11.610595 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 29 07:05:11 crc kubenswrapper[4828]: E1129 07:05:11.611024 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znxdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-r5hqw_openshift-marketplace(097b513c-f25d-4a6d-9c88-90ac8f322a19): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:05:11 crc kubenswrapper[4828]: E1129 07:05:11.612470 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-r5hqw" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" Nov 29 07:05:11 crc kubenswrapper[4828]: I1129 07:05:11.652114 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 07:05:11 crc kubenswrapper[4828]: E1129 07:05:11.672589 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 29 07:05:11 crc kubenswrapper[4828]: E1129 07:05:11.672731 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47mv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jxws9_openshift-marketplace(9a9da14c-b652-4eca-bf03-8eedf90d40fe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:05:11 crc kubenswrapper[4828]: E1129 07:05:11.674394 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jxws9" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" Nov 29 07:05:11 crc 
kubenswrapper[4828]: I1129 07:05:11.731545 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 07:05:11 crc kubenswrapper[4828]: W1129 07:05:11.743115 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod88aba7cf_dd10_469c_aea3_11ea4f6b6a01.slice/crio-8e2357d4b99ece83c1f99912d75a086c3805476a13a89c1d69a7cce70ffdfdb9 WatchSource:0}: Error finding container 8e2357d4b99ece83c1f99912d75a086c3805476a13a89c1d69a7cce70ffdfdb9: Status 404 returned error can't find the container with id 8e2357d4b99ece83c1f99912d75a086c3805476a13a89c1d69a7cce70ffdfdb9 Nov 29 07:05:11 crc kubenswrapper[4828]: E1129 07:05:11.876711 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 29 07:05:11 crc kubenswrapper[4828]: E1129 07:05:11.876895 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xkj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-twkcr_openshift-marketplace(edc8363b-0cee-48b5-b568-8a694fdc91eb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:05:11 crc kubenswrapper[4828]: E1129 07:05:11.878013 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-twkcr" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" Nov 29 07:05:11 crc 
kubenswrapper[4828]: I1129 07:05:11.880486 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"88aba7cf-dd10-469c-aea3-11ea4f6b6a01","Type":"ContainerStarted","Data":"8e2357d4b99ece83c1f99912d75a086c3805476a13a89c1d69a7cce70ffdfdb9"} Nov 29 07:05:11 crc kubenswrapper[4828]: I1129 07:05:11.885814 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"de5680b47e332c14b381bb72b4ac2148493c666a12254a81b7fa5d8120a5bb93"} Nov 29 07:05:11 crc kubenswrapper[4828]: I1129 07:05:11.886927 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"73fd3fc4-4f2d-464f-8fd1-766389f42933","Type":"ContainerStarted","Data":"fc5fca1cef18e3674f427c585825b4d7cd5aab093b7a1fb6d3e11336802dc0c9"} Nov 29 07:05:11 crc kubenswrapper[4828]: E1129 07:05:11.888306 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jxws9" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" Nov 29 07:05:11 crc kubenswrapper[4828]: E1129 07:05:11.889101 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-r5hqw" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" Nov 29 07:05:11 crc kubenswrapper[4828]: E1129 07:05:11.889378 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-b2qvr" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" Nov 29 07:05:12 crc kubenswrapper[4828]: I1129 07:05:12.892921 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"88aba7cf-dd10-469c-aea3-11ea4f6b6a01","Type":"ContainerStarted","Data":"1e3482ebe0278d4a99c2e9aea456d7ce4efbf05ebff9846842c41f0cd72edc64"} Nov 29 07:05:12 crc kubenswrapper[4828]: I1129 07:05:12.894886 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"73fd3fc4-4f2d-464f-8fd1-766389f42933","Type":"ContainerStarted","Data":"2bd73bbfae3f7da3abcdac5bfd6ef5358c24abefd5b59c1db1a0ea10e1942d5d"} Nov 29 07:05:12 crc kubenswrapper[4828]: E1129 07:05:12.897072 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-twkcr" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" Nov 29 07:05:12 crc kubenswrapper[4828]: I1129 07:05:12.913488 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=45.913446399 podStartE2EDuration="45.913446399s" podCreationTimestamp="2025-11-29 07:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:05:12.906471459 +0000 UTC m=+252.528547517" watchObservedRunningTime="2025-11-29 07:05:12.913446399 +0000 UTC m=+252.535522457" Nov 29 07:05:12 crc kubenswrapper[4828]: I1129 07:05:12.946031 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=49.946006978 
podStartE2EDuration="49.946006978s" podCreationTimestamp="2025-11-29 07:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:05:12.928350203 +0000 UTC m=+252.550426281" watchObservedRunningTime="2025-11-29 07:05:12.946006978 +0000 UTC m=+252.568083036" Nov 29 07:05:13 crc kubenswrapper[4828]: I1129 07:05:13.901557 4828 generic.go:334] "Generic (PLEG): container finished" podID="73fd3fc4-4f2d-464f-8fd1-766389f42933" containerID="2bd73bbfae3f7da3abcdac5bfd6ef5358c24abefd5b59c1db1a0ea10e1942d5d" exitCode=0 Nov 29 07:05:13 crc kubenswrapper[4828]: I1129 07:05:13.901619 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"73fd3fc4-4f2d-464f-8fd1-766389f42933","Type":"ContainerDied","Data":"2bd73bbfae3f7da3abcdac5bfd6ef5358c24abefd5b59c1db1a0ea10e1942d5d"} Nov 29 07:05:15 crc kubenswrapper[4828]: I1129 07:05:15.111043 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:05:15 crc kubenswrapper[4828]: I1129 07:05:15.199468 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73fd3fc4-4f2d-464f-8fd1-766389f42933-kube-api-access\") pod \"73fd3fc4-4f2d-464f-8fd1-766389f42933\" (UID: \"73fd3fc4-4f2d-464f-8fd1-766389f42933\") " Nov 29 07:05:15 crc kubenswrapper[4828]: I1129 07:05:15.199567 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73fd3fc4-4f2d-464f-8fd1-766389f42933-kubelet-dir\") pod \"73fd3fc4-4f2d-464f-8fd1-766389f42933\" (UID: \"73fd3fc4-4f2d-464f-8fd1-766389f42933\") " Nov 29 07:05:15 crc kubenswrapper[4828]: I1129 07:05:15.199899 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73fd3fc4-4f2d-464f-8fd1-766389f42933-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "73fd3fc4-4f2d-464f-8fd1-766389f42933" (UID: "73fd3fc4-4f2d-464f-8fd1-766389f42933"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:05:15 crc kubenswrapper[4828]: I1129 07:05:15.206441 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73fd3fc4-4f2d-464f-8fd1-766389f42933-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "73fd3fc4-4f2d-464f-8fd1-766389f42933" (UID: "73fd3fc4-4f2d-464f-8fd1-766389f42933"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:05:15 crc kubenswrapper[4828]: I1129 07:05:15.301461 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73fd3fc4-4f2d-464f-8fd1-766389f42933-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:05:15 crc kubenswrapper[4828]: I1129 07:05:15.301498 4828 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73fd3fc4-4f2d-464f-8fd1-766389f42933-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:05:15 crc kubenswrapper[4828]: I1129 07:05:15.913328 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"73fd3fc4-4f2d-464f-8fd1-766389f42933","Type":"ContainerDied","Data":"fc5fca1cef18e3674f427c585825b4d7cd5aab093b7a1fb6d3e11336802dc0c9"} Nov 29 07:05:15 crc kubenswrapper[4828]: I1129 07:05:15.913383 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc5fca1cef18e3674f427c585825b4d7cd5aab093b7a1fb6d3e11336802dc0c9" Nov 29 07:05:15 crc kubenswrapper[4828]: I1129 07:05:15.913932 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 29 07:05:16 crc kubenswrapper[4828]: I1129 07:05:16.922207 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vktx7" event={"ID":"81124877-aea7-4853-b4da-978dcf29d980","Type":"ContainerStarted","Data":"0c2947e13356c80bc3455687505ebc6039a3948c7ddb765b72568ea1e77faa28"}
Nov 29 07:05:17 crc kubenswrapper[4828]: I1129 07:05:17.479315 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xfq6k"]
Nov 29 07:05:17 crc kubenswrapper[4828]: I1129 07:05:17.932974 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vktx7" event={"ID":"81124877-aea7-4853-b4da-978dcf29d980","Type":"ContainerDied","Data":"0c2947e13356c80bc3455687505ebc6039a3948c7ddb765b72568ea1e77faa28"}
Nov 29 07:05:17 crc kubenswrapper[4828]: I1129 07:05:17.932882 4828 generic.go:334] "Generic (PLEG): container finished" podID="81124877-aea7-4853-b4da-978dcf29d980" containerID="0c2947e13356c80bc3455687505ebc6039a3948c7ddb765b72568ea1e77faa28" exitCode=0
Nov 29 07:05:20 crc kubenswrapper[4828]: I1129 07:05:20.973471 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vktx7" event={"ID":"81124877-aea7-4853-b4da-978dcf29d980","Type":"ContainerStarted","Data":"cea5d2f28a988dca7a0d0bc233355f32300f9a7d117a5afc5fa685fe2950fabd"}
Nov 29 07:05:25 crc kubenswrapper[4828]: I1129 07:05:25.022109 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vktx7" podStartSLOduration=8.498630787 podStartE2EDuration="1m40.022070104s" podCreationTimestamp="2025-11-29 07:03:45 +0000 UTC" firstStartedPulling="2025-11-29 07:03:47.182957324 +0000 UTC m=+166.805033382" lastFinishedPulling="2025-11-29 07:05:18.706396641 +0000 UTC m=+258.328472699" observedRunningTime="2025-11-29 07:05:25.017741809 +0000 UTC m=+264.639817877" watchObservedRunningTime="2025-11-29 07:05:25.022070104 +0000 UTC m=+264.644146162"
Nov 29 07:05:25 crc kubenswrapper[4828]: I1129 07:05:25.774380 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vktx7"
Nov 29 07:05:25 crc kubenswrapper[4828]: I1129 07:05:25.774469 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vktx7"
Nov 29 07:05:26 crc kubenswrapper[4828]: I1129 07:05:26.033902 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vktx7"
Nov 29 07:05:27 crc kubenswrapper[4828]: I1129 07:05:27.060780 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vktx7"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.266568 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.266999 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.269130 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.269408 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.277914 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.288892 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.370040 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.370241 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.372372 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.382500 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.395956 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.396205 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.531942 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.542597 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 07:05:35 crc kubenswrapper[4828]: I1129 07:05:35.549017 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 07:05:42 crc kubenswrapper[4828]: I1129 07:05:42.508733 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" podUID="03f7edb8-ded1-483c-81d1-d75417a3dbdc" containerName="oauth-openshift" containerID="cri-o://a646a41c0f1ca52e9e9c9e4c7ea2710d12ba102ae629881ec6a8e6f4ac0fef28" gracePeriod=15
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.106074 4828 generic.go:334] "Generic (PLEG): container finished" podID="03f7edb8-ded1-483c-81d1-d75417a3dbdc" containerID="a646a41c0f1ca52e9e9c9e4c7ea2710d12ba102ae629881ec6a8e6f4ac0fef28" exitCode=0
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.106172 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" event={"ID":"03f7edb8-ded1-483c-81d1-d75417a3dbdc","Type":"ContainerDied","Data":"a646a41c0f1ca52e9e9c9e4c7ea2710d12ba102ae629881ec6a8e6f4ac0fef28"}
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.309944 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.366685 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7448d7568b-bw472"]
Nov 29 07:05:43 crc kubenswrapper[4828]: E1129 07:05:43.367034 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73fd3fc4-4f2d-464f-8fd1-766389f42933" containerName="pruner"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.367059 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="73fd3fc4-4f2d-464f-8fd1-766389f42933" containerName="pruner"
Nov 29 07:05:43 crc kubenswrapper[4828]: E1129 07:05:43.367096 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f7edb8-ded1-483c-81d1-d75417a3dbdc" containerName="oauth-openshift"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.367103 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f7edb8-ded1-483c-81d1-d75417a3dbdc" containerName="oauth-openshift"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.367244 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="73fd3fc4-4f2d-464f-8fd1-766389f42933" containerName="pruner"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.367262 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f7edb8-ded1-483c-81d1-d75417a3dbdc" containerName="oauth-openshift"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.369142 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.374579 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7448d7568b-bw472"]
Nov 29 07:05:43 crc kubenswrapper[4828]: W1129 07:05:43.457681 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-0384427ca2ceceae6b4ce80888c2c1fd22a9a9241445f5dfdbbe88f3e3dd5ab8 WatchSource:0}: Error finding container 0384427ca2ceceae6b4ce80888c2c1fd22a9a9241445f5dfdbbe88f3e3dd5ab8: Status 404 returned error can't find the container with id 0384427ca2ceceae6b4ce80888c2c1fd22a9a9241445f5dfdbbe88f3e3dd5ab8
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473262 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-cliconfig\") pod \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") "
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473345 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dltjh\" (UniqueName: \"kubernetes.io/projected/03f7edb8-ded1-483c-81d1-d75417a3dbdc-kube-api-access-dltjh\") pod \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") "
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473406 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-trusted-ca-bundle\") pod \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") "
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473454 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-login\") pod \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") "
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473481 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-ocp-branding-template\") pod \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") "
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473502 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-error\") pod \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") "
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473525 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-service-ca\") pod \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") "
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473571 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-serving-cert\") pod \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") "
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473605 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-session\") pod \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") "
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473662 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-audit-policies\") pod \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") "
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473685 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-idp-0-file-data\") pod \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") "
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473724 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-router-certs\") pod \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") "
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473776 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-provider-selection\") pod \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") "
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473809 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03f7edb8-ded1-483c-81d1-d75417a3dbdc-audit-dir\") pod \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\" (UID: \"03f7edb8-ded1-483c-81d1-d75417a3dbdc\") "
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.473981 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-session\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.474018 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1745312e-4c50-4b6b-8b86-2716008b6dd2-audit-policies\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.474060 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.474094 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-service-ca\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.474154 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-router-certs\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.474185 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1745312e-4c50-4b6b-8b86-2716008b6dd2-audit-dir\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.474214 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.474254 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-user-template-login\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.474304 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.474334 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg6lk\" (UniqueName: \"kubernetes.io/projected/1745312e-4c50-4b6b-8b86-2716008b6dd2-kube-api-access-fg6lk\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.474357 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.474388 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-user-template-error\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.474482 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "03f7edb8-ded1-483c-81d1-d75417a3dbdc" (UID: "03f7edb8-ded1-483c-81d1-d75417a3dbdc"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.475349 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "03f7edb8-ded1-483c-81d1-d75417a3dbdc" (UID: "03f7edb8-ded1-483c-81d1-d75417a3dbdc"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.475383 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "03f7edb8-ded1-483c-81d1-d75417a3dbdc" (UID: "03f7edb8-ded1-483c-81d1-d75417a3dbdc"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.475744 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f7edb8-ded1-483c-81d1-d75417a3dbdc-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "03f7edb8-ded1-483c-81d1-d75417a3dbdc" (UID: "03f7edb8-ded1-483c-81d1-d75417a3dbdc"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.474448 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.478711 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "03f7edb8-ded1-483c-81d1-d75417a3dbdc" (UID: "03f7edb8-ded1-483c-81d1-d75417a3dbdc"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.478735 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.479531 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.479550 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.479564 4828 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.479584 4828 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03f7edb8-ded1-483c-81d1-d75417a3dbdc-audit-dir\") on node \"crc\" DevicePath \"\""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.479595 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.493029 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f7edb8-ded1-483c-81d1-d75417a3dbdc-kube-api-access-dltjh" (OuterVolumeSpecName: "kube-api-access-dltjh") pod "03f7edb8-ded1-483c-81d1-d75417a3dbdc" (UID: "03f7edb8-ded1-483c-81d1-d75417a3dbdc"). InnerVolumeSpecName "kube-api-access-dltjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.493368 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "03f7edb8-ded1-483c-81d1-d75417a3dbdc" (UID: "03f7edb8-ded1-483c-81d1-d75417a3dbdc"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.494187 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "03f7edb8-ded1-483c-81d1-d75417a3dbdc" (UID: "03f7edb8-ded1-483c-81d1-d75417a3dbdc"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.495970 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "03f7edb8-ded1-483c-81d1-d75417a3dbdc" (UID: "03f7edb8-ded1-483c-81d1-d75417a3dbdc"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.497878 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "03f7edb8-ded1-483c-81d1-d75417a3dbdc" (UID: "03f7edb8-ded1-483c-81d1-d75417a3dbdc"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.500276 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "03f7edb8-ded1-483c-81d1-d75417a3dbdc" (UID: "03f7edb8-ded1-483c-81d1-d75417a3dbdc"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.500323 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "03f7edb8-ded1-483c-81d1-d75417a3dbdc" (UID: "03f7edb8-ded1-483c-81d1-d75417a3dbdc"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.505460 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "03f7edb8-ded1-483c-81d1-d75417a3dbdc" (UID: "03f7edb8-ded1-483c-81d1-d75417a3dbdc"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.556442 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "03f7edb8-ded1-483c-81d1-d75417a3dbdc" (UID: "03f7edb8-ded1-483c-81d1-d75417a3dbdc"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.580457 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.580521 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg6lk\" (UniqueName: \"kubernetes.io/projected/1745312e-4c50-4b6b-8b86-2716008b6dd2-kube-api-access-fg6lk\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.580560 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.580585 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-user-template-error\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.580639 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.580663 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.580691 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-session\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.580722 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1745312e-4c50-4b6b-8b86-2716008b6dd2-audit-policies\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.580760 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.580788 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-service-ca\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.580813 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-router-certs\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.580849 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1745312e-4c50-4b6b-8b86-2716008b6dd2-audit-dir\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.580879 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.580975 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-user-template-login\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.581065 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.581082 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.581096 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.581108 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dltjh\" (UniqueName: \"kubernetes.io/projected/03f7edb8-ded1-483c-81d1-d75417a3dbdc-kube-api-access-dltjh\") on node \"crc\" DevicePath \"\""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.581119 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.581132 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.581144 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.581155 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.581166 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03f7edb8-ded1-483c-81d1-d75417a3dbdc-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.583737 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1745312e-4c50-4b6b-8b86-2716008b6dd2-audit-policies\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.583856 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-service-ca\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472"
Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.584189 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.584326 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1745312e-4c50-4b6b-8b86-2716008b6dd2-audit-dir\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.585358 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-user-template-login\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.589923 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-user-template-error\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.589986 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " 
pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.590011 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.590025 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.590125 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-session\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.590479 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.590672 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.591857 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1745312e-4c50-4b6b-8b86-2716008b6dd2-v4-0-config-system-router-certs\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.614589 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg6lk\" (UniqueName: \"kubernetes.io/projected/1745312e-4c50-4b6b-8b86-2716008b6dd2-kube-api-access-fg6lk\") pod \"oauth-openshift-7448d7568b-bw472\" (UID: \"1745312e-4c50-4b6b-8b86-2716008b6dd2\") " pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.692980 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:43 crc kubenswrapper[4828]: I1129 07:05:43.913184 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7448d7568b-bw472"] Nov 29 07:05:43 crc kubenswrapper[4828]: W1129 07:05:43.921075 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1745312e_4c50_4b6b_8b86_2716008b6dd2.slice/crio-10fde8f2ab8b70dbc4b09623d9dc1eee3b389faa976d29ce7587471659ea51bb WatchSource:0}: Error finding container 10fde8f2ab8b70dbc4b09623d9dc1eee3b389faa976d29ce7587471659ea51bb: Status 404 returned error can't find the container with id 10fde8f2ab8b70dbc4b09623d9dc1eee3b389faa976d29ce7587471659ea51bb Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.119813 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpwkr" event={"ID":"eccbf47b-47fe-4980-b09b-cde621bb188a","Type":"ContainerStarted","Data":"6addb6af3d835ecfe8ed2494dd9630c3605cb8108304a598a92d14a9d440e42c"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.121589 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-db4cv" event={"ID":"0a44e830-89c8-428e-ab90-d8936c069de4","Type":"ContainerStarted","Data":"f201724c66c0747f2ceee1084b859440246c6f425417ffea800c5811eeb82568"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.123159 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" event={"ID":"03f7edb8-ded1-483c-81d1-d75417a3dbdc","Type":"ContainerDied","Data":"ff99c3ae1bb4ca773018b0ad5272e03bab4e0ad94227af2c28a272a2bea3bdd9"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.123237 4828 scope.go:117] "RemoveContainer" containerID="a646a41c0f1ca52e9e9c9e4c7ea2710d12ba102ae629881ec6a8e6f4ac0fef28" Nov 29 07:05:44 crc 
kubenswrapper[4828]: I1129 07:05:44.123453 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xfq6k" Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.133523 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c9a32b57f4fd317bf9f6885fbd4081a851f719833de016d801b47f2736b86b14"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.133569 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0384427ca2ceceae6b4ce80888c2c1fd22a9a9241445f5dfdbbe88f3e3dd5ab8"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.135169 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" event={"ID":"1745312e-4c50-4b6b-8b86-2716008b6dd2","Type":"ContainerStarted","Data":"10fde8f2ab8b70dbc4b09623d9dc1eee3b389faa976d29ce7587471659ea51bb"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.140187 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twkcr" event={"ID":"edc8363b-0cee-48b5-b568-8a694fdc91eb","Type":"ContainerStarted","Data":"835093c6ef72bb8e075a68161cb769640f38d159b9a0d963ca28edb4fe073e2e"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.143069 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2qvr" event={"ID":"1c5bb383-f3ed-43cd-b62c-38d3e2922f11","Type":"ContainerStarted","Data":"ebeb6f36810ea7dae384688486c7918158e6621dcc59a346f47e4bb202e665ba"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.147492 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-jxws9" event={"ID":"9a9da14c-b652-4eca-bf03-8eedf90d40fe","Type":"ContainerStarted","Data":"85883e2b2db1183483f9441f959fa5535d40dcd157df6fc4cade711a6480a875"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.149219 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r5hqw" event={"ID":"097b513c-f25d-4a6d-9c88-90ac8f322a19","Type":"ContainerStarted","Data":"72eb8aa9f0a28e649a917b02ef9fe63bb194175b445b9c4108dacd163c7387c2"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.151605 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh2x8" event={"ID":"35451e26-ec80-4e68-bf86-4f0990c394af","Type":"ContainerStarted","Data":"6d33a8d8d489027e413b9a5dda78346ad029644bf2b332bc2a005d4430c79520"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.153664 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"588a401f32141d19f97586223c0f58be8724886b59cdeddeaa58743438afc168"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.153827 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"097a0c71ce60d1a7c21b2c5a232629da81fdb43430a3a966473a3b1f9b75260b"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.158660 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"94ac066dcf4423a9813687bf909324f2988d129c3163d56cb718515d94d9c1bf"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.158854 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b36880b5710140422eb7ae5f72a28bf798645a9412bda3de6ed0d0882260d067"} Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.175486 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xfq6k"] Nov 29 07:05:44 crc kubenswrapper[4828]: I1129 07:05:44.200134 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xfq6k"] Nov 29 07:05:44 crc kubenswrapper[4828]: E1129 07:05:44.615088 4828 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c5bb383_f3ed_43cd_b62c_38d3e2922f11.slice/crio-ebeb6f36810ea7dae384688486c7918158e6621dcc59a346f47e4bb202e665ba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeccbf47b_47fe_4980_b09b_cde621bb188a.slice/crio-6addb6af3d835ecfe8ed2494dd9630c3605cb8108304a598a92d14a9d440e42c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a44e830_89c8_428e_ab90_d8936c069de4.slice/crio-conmon-f201724c66c0747f2ceee1084b859440246c6f425417ffea800c5811eeb82568.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeccbf47b_47fe_4980_b09b_cde621bb188a.slice/crio-conmon-6addb6af3d835ecfe8ed2494dd9630c3605cb8108304a598a92d14a9d440e42c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c5bb383_f3ed_43cd_b62c_38d3e2922f11.slice/crio-conmon-ebeb6f36810ea7dae384688486c7918158e6621dcc59a346f47e4bb202e665ba.scope\": RecentStats: unable to find data in memory cache]" Nov 29 07:05:45 crc 
kubenswrapper[4828]: I1129 07:05:45.175183 4828 generic.go:334] "Generic (PLEG): container finished" podID="0a44e830-89c8-428e-ab90-d8936c069de4" containerID="f201724c66c0747f2ceee1084b859440246c6f425417ffea800c5811eeb82568" exitCode=0 Nov 29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.175281 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-db4cv" event={"ID":"0a44e830-89c8-428e-ab90-d8936c069de4","Type":"ContainerDied","Data":"f201724c66c0747f2ceee1084b859440246c6f425417ffea800c5811eeb82568"} Nov 29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.177865 4828 generic.go:334] "Generic (PLEG): container finished" podID="35451e26-ec80-4e68-bf86-4f0990c394af" containerID="6d33a8d8d489027e413b9a5dda78346ad029644bf2b332bc2a005d4430c79520" exitCode=0 Nov 29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.177946 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh2x8" event={"ID":"35451e26-ec80-4e68-bf86-4f0990c394af","Type":"ContainerDied","Data":"6d33a8d8d489027e413b9a5dda78346ad029644bf2b332bc2a005d4430c79520"} Nov 29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.186919 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" event={"ID":"1745312e-4c50-4b6b-8b86-2716008b6dd2","Type":"ContainerStarted","Data":"5133ef40f943f13b88693eebe25fb2df7ae79783c559e9363fb40406ffea98df"} Nov 29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.187204 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.194146 4828 generic.go:334] "Generic (PLEG): container finished" podID="eccbf47b-47fe-4980-b09b-cde621bb188a" containerID="6addb6af3d835ecfe8ed2494dd9630c3605cb8108304a598a92d14a9d440e42c" exitCode=0 Nov 29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.194245 4828 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpwkr" event={"ID":"eccbf47b-47fe-4980-b09b-cde621bb188a","Type":"ContainerDied","Data":"6addb6af3d835ecfe8ed2494dd9630c3605cb8108304a598a92d14a9d440e42c"} Nov 29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.197988 4828 generic.go:334] "Generic (PLEG): container finished" podID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" containerID="ebeb6f36810ea7dae384688486c7918158e6621dcc59a346f47e4bb202e665ba" exitCode=0 Nov 29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.198157 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2qvr" event={"ID":"1c5bb383-f3ed-43cd-b62c-38d3e2922f11","Type":"ContainerDied","Data":"ebeb6f36810ea7dae384688486c7918158e6621dcc59a346f47e4bb202e665ba"} Nov 29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.205228 4828 generic.go:334] "Generic (PLEG): container finished" podID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" containerID="85883e2b2db1183483f9441f959fa5535d40dcd157df6fc4cade711a6480a875" exitCode=0 Nov 29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.206785 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxws9" event={"ID":"9a9da14c-b652-4eca-bf03-8eedf90d40fe","Type":"ContainerDied","Data":"85883e2b2db1183483f9441f959fa5535d40dcd157df6fc4cade711a6480a875"} Nov 29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.372898 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" podStartSLOduration=28.372861941 podStartE2EDuration="28.372861941s" podCreationTimestamp="2025-11-29 07:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:05:45.371296362 +0000 UTC m=+284.993372430" watchObservedRunningTime="2025-11-29 07:05:45.372861941 +0000 UTC m=+284.994937999" Nov 
29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.392285 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7448d7568b-bw472" Nov 29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.423710 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03f7edb8-ded1-483c-81d1-d75417a3dbdc" path="/var/lib/kubelet/pods/03f7edb8-ded1-483c-81d1-d75417a3dbdc/volumes" Nov 29 07:05:45 crc kubenswrapper[4828]: I1129 07:05:45.543390 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:05:46 crc kubenswrapper[4828]: I1129 07:05:46.222910 4828 generic.go:334] "Generic (PLEG): container finished" podID="edc8363b-0cee-48b5-b568-8a694fdc91eb" containerID="835093c6ef72bb8e075a68161cb769640f38d159b9a0d963ca28edb4fe073e2e" exitCode=0 Nov 29 07:05:46 crc kubenswrapper[4828]: I1129 07:05:46.222967 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twkcr" event={"ID":"edc8363b-0cee-48b5-b568-8a694fdc91eb","Type":"ContainerDied","Data":"835093c6ef72bb8e075a68161cb769640f38d159b9a0d963ca28edb4fe073e2e"} Nov 29 07:05:47 crc kubenswrapper[4828]: I1129 07:05:47.301105 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpwkr" event={"ID":"eccbf47b-47fe-4980-b09b-cde621bb188a","Type":"ContainerStarted","Data":"c07b99bff05677b5b93955bed8db2dc66bb41624e9a2b9117367a8456f26b09a"} Nov 29 07:05:47 crc kubenswrapper[4828]: I1129 07:05:47.303904 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2qvr" event={"ID":"1c5bb383-f3ed-43cd-b62c-38d3e2922f11","Type":"ContainerStarted","Data":"4a5f781db81fdabc05e7b02dbead37353877ef046a552098f8548afb684ad85b"} Nov 29 07:05:47 crc kubenswrapper[4828]: I1129 07:05:47.306928 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-jxws9" event={"ID":"9a9da14c-b652-4eca-bf03-8eedf90d40fe","Type":"ContainerStarted","Data":"fd4789150dfa94299901fcf6cb0d91e7485402c5760cd86a7674971fbf200b37"} Nov 29 07:05:47 crc kubenswrapper[4828]: I1129 07:05:47.308971 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh2x8" event={"ID":"35451e26-ec80-4e68-bf86-4f0990c394af","Type":"ContainerStarted","Data":"71913356c8df2b2facbff98e5b645e5300ab20f6423476b7663b8b339b21b543"} Nov 29 07:05:47 crc kubenswrapper[4828]: I1129 07:05:47.313896 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-db4cv" event={"ID":"0a44e830-89c8-428e-ab90-d8936c069de4","Type":"ContainerStarted","Data":"7f71177be44eb6dafe0c04817046796d8c8193abac5819ba60da7ed4991c7f8a"} Nov 29 07:05:47 crc kubenswrapper[4828]: I1129 07:05:47.317156 4828 generic.go:334] "Generic (PLEG): container finished" podID="097b513c-f25d-4a6d-9c88-90ac8f322a19" containerID="72eb8aa9f0a28e649a917b02ef9fe63bb194175b445b9c4108dacd163c7387c2" exitCode=0 Nov 29 07:05:47 crc kubenswrapper[4828]: I1129 07:05:47.317745 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r5hqw" event={"ID":"097b513c-f25d-4a6d-9c88-90ac8f322a19","Type":"ContainerDied","Data":"72eb8aa9f0a28e649a917b02ef9fe63bb194175b445b9c4108dacd163c7387c2"} Nov 29 07:05:47 crc kubenswrapper[4828]: I1129 07:05:47.339793 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vpwkr" podStartSLOduration=5.337301912 podStartE2EDuration="2m5.339752977s" podCreationTimestamp="2025-11-29 07:03:42 +0000 UTC" firstStartedPulling="2025-11-29 07:03:46.178139683 +0000 UTC m=+165.800215741" lastFinishedPulling="2025-11-29 07:05:46.180590748 +0000 UTC m=+285.802666806" observedRunningTime="2025-11-29 07:05:47.339578531 +0000 UTC m=+286.961654619" 
watchObservedRunningTime="2025-11-29 07:05:47.339752977 +0000 UTC m=+286.961829035" Nov 29 07:05:47 crc kubenswrapper[4828]: I1129 07:05:47.359798 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gh2x8" podStartSLOduration=4.68419212 podStartE2EDuration="2m2.359776391s" podCreationTimestamp="2025-11-29 07:03:45 +0000 UTC" firstStartedPulling="2025-11-29 07:03:48.262005877 +0000 UTC m=+167.884081935" lastFinishedPulling="2025-11-29 07:05:45.937590148 +0000 UTC m=+285.559666206" observedRunningTime="2025-11-29 07:05:47.357527741 +0000 UTC m=+286.979603799" watchObservedRunningTime="2025-11-29 07:05:47.359776391 +0000 UTC m=+286.981852459" Nov 29 07:05:47 crc kubenswrapper[4828]: I1129 07:05:47.383454 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jxws9" podStartSLOduration=4.368226342 podStartE2EDuration="2m4.383432699s" podCreationTimestamp="2025-11-29 07:03:43 +0000 UTC" firstStartedPulling="2025-11-29 07:03:46.068130667 +0000 UTC m=+165.690206725" lastFinishedPulling="2025-11-29 07:05:46.083337024 +0000 UTC m=+285.705413082" observedRunningTime="2025-11-29 07:05:47.376159283 +0000 UTC m=+286.998235341" watchObservedRunningTime="2025-11-29 07:05:47.383432699 +0000 UTC m=+287.005508757" Nov 29 07:05:47 crc kubenswrapper[4828]: I1129 07:05:47.396435 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-db4cv" podStartSLOduration=4.383247472 podStartE2EDuration="2m4.396408814s" podCreationTimestamp="2025-11-29 07:03:43 +0000 UTC" firstStartedPulling="2025-11-29 07:03:46.084496089 +0000 UTC m=+165.706572187" lastFinishedPulling="2025-11-29 07:05:46.097657481 +0000 UTC m=+285.719733529" observedRunningTime="2025-11-29 07:05:47.394143804 +0000 UTC m=+287.016219872" watchObservedRunningTime="2025-11-29 07:05:47.396408814 +0000 UTC m=+287.018484872" Nov 29 07:05:47 
crc kubenswrapper[4828]: I1129 07:05:47.436610 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b2qvr" podStartSLOduration=4.488665254 podStartE2EDuration="2m4.436588568s" podCreationTimestamp="2025-11-29 07:03:43 +0000 UTC" firstStartedPulling="2025-11-29 07:03:46.047055774 +0000 UTC m=+165.669131832" lastFinishedPulling="2025-11-29 07:05:45.994979088 +0000 UTC m=+285.617055146" observedRunningTime="2025-11-29 07:05:47.433066158 +0000 UTC m=+287.055142246" watchObservedRunningTime="2025-11-29 07:05:47.436588568 +0000 UTC m=+287.058664626" Nov 29 07:05:48 crc kubenswrapper[4828]: I1129 07:05:48.326811 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r5hqw" event={"ID":"097b513c-f25d-4a6d-9c88-90ac8f322a19","Type":"ContainerStarted","Data":"9cc78342c8838f578ae52889a768947d187b351b8a3d2057f86364af88b8a293"} Nov 29 07:05:48 crc kubenswrapper[4828]: I1129 07:05:48.330778 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twkcr" event={"ID":"edc8363b-0cee-48b5-b568-8a694fdc91eb","Type":"ContainerStarted","Data":"9d599274cbaabb060d99ed6f234a6ec172d63155cb7be254326a1451f86df015"} Nov 29 07:05:48 crc kubenswrapper[4828]: I1129 07:05:48.356039 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r5hqw" podStartSLOduration=2.838809994 podStartE2EDuration="2m2.356021669s" podCreationTimestamp="2025-11-29 07:03:46 +0000 UTC" firstStartedPulling="2025-11-29 07:03:48.25940965 +0000 UTC m=+167.881485708" lastFinishedPulling="2025-11-29 07:05:47.776621325 +0000 UTC m=+287.398697383" observedRunningTime="2025-11-29 07:05:48.355566485 +0000 UTC m=+287.977642543" watchObservedRunningTime="2025-11-29 07:05:48.356021669 +0000 UTC m=+287.978097727" Nov 29 07:05:48 crc kubenswrapper[4828]: I1129 07:05:48.377903 4828 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-operators-twkcr" podStartSLOduration=3.777422025 podStartE2EDuration="2m2.377885431s" podCreationTimestamp="2025-11-29 07:03:46 +0000 UTC" firstStartedPulling="2025-11-29 07:03:48.218761683 +0000 UTC m=+167.840837741" lastFinishedPulling="2025-11-29 07:05:46.819225089 +0000 UTC m=+286.441301147" observedRunningTime="2025-11-29 07:05:48.373395151 +0000 UTC m=+287.995471209" watchObservedRunningTime="2025-11-29 07:05:48.377885431 +0000 UTC m=+287.999961489" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.676221 4828 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.676994 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.677306 4828 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.677602 4828 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.677732 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20" gracePeriod=15 Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.677671 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
containerID="cri-o://dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d" gracePeriod=15 Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.677751 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5" gracePeriod=15 Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.677656 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3" gracePeriod=15 Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.677828 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad" gracePeriod=15 Nov 29 07:05:49 crc kubenswrapper[4828]: E1129 07:05:49.678190 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.678257 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 07:05:49 crc kubenswrapper[4828]: E1129 07:05:49.678341 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.678400 4828 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 29 07:05:49 crc kubenswrapper[4828]: E1129 07:05:49.678463 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.678521 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 29 07:05:49 crc kubenswrapper[4828]: E1129 07:05:49.678588 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.678642 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 29 07:05:49 crc kubenswrapper[4828]: E1129 07:05:49.678700 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.678760 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 29 07:05:49 crc kubenswrapper[4828]: E1129 07:05:49.678819 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.678876 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 29 07:05:49 crc kubenswrapper[4828]: E1129 07:05:49.678938 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.679004 4828 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.679298 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.679381 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.679444 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.680307 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.680408 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.680530 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.680643 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 29 07:05:49 crc kubenswrapper[4828]: E1129 07:05:49.680862 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.683819 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.714017 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.747194 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.747254 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.747304 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.747328 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.747353 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.747402 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.747434 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.747476 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916018 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916075 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916103 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916125 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916179 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916198 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916241 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916292 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916376 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916422 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916451 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916477 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916505 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916529 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916560 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:05:49 crc kubenswrapper[4828]: I1129 07:05:49.916588 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:50 crc kubenswrapper[4828]: I1129 07:05:50.010285 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:05:50 crc kubenswrapper[4828]: W1129 07:05:50.035907 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-7e246579b33699aeecce085367b5050c12178acafd1d9066db0085d09110d883 WatchSource:0}: Error finding container 7e246579b33699aeecce085367b5050c12178acafd1d9066db0085d09110d883: Status 404 returned error can't find the container with id 7e246579b33699aeecce085367b5050c12178acafd1d9066db0085d09110d883 Nov 29 07:05:50 crc kubenswrapper[4828]: E1129 07:05:50.040053 4828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.96:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c686807cc518f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:05:50.039110031 +0000 UTC m=+289.661186089,LastTimestamp:2025-11-29 07:05:50.039110031 +0000 UTC m=+289.661186089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:05:50 crc kubenswrapper[4828]: I1129 07:05:50.352141 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7e246579b33699aeecce085367b5050c12178acafd1d9066db0085d09110d883"} Nov 29 07:05:51 crc kubenswrapper[4828]: I1129 07:05:51.362119 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 29 07:05:51 crc kubenswrapper[4828]: I1129 07:05:51.363600 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 07:05:51 crc kubenswrapper[4828]: I1129 07:05:51.364285 4828 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5" exitCode=2 Nov 29 07:05:51 crc kubenswrapper[4828]: I1129 07:05:51.414004 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:51 crc kubenswrapper[4828]: I1129 07:05:51.414293 4828 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:52 crc kubenswrapper[4828]: E1129 07:05:52.829693 4828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 
38.129.56.96:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c686807cc518f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:05:50.039110031 +0000 UTC m=+289.661186089,LastTimestamp:2025-11-29 07:05:50.039110031 +0000 UTC m=+289.661186089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.510383 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.510432 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.550314 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.553445 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:53 crc kubenswrapper[4828]: 
I1129 07:05:53.554006 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.619809 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.619876 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.663411 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.664146 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.664704 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.665249 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.724696 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.724765 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.761893 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.762574 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.763075 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.763555 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 
07:05:53.763775 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.848666 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.848746 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.889854 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.890773 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.891052 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.891437 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.891902 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:53 crc kubenswrapper[4828]: I1129 07:05:53.892143 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:55 crc kubenswrapper[4828]: I1129 07:05:55.985042 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:05:55 crc kubenswrapper[4828]: I1129 07:05:55.985703 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.028729 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.029396 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 
07:05:56.030024 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.030584 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.031013 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.031308 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.031603 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: 
I1129 07:05:56.499953 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.500006 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.544347 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.544993 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.545604 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.546295 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.546656 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.547019 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.547508 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.547778 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.773588 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.773923 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.815380 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.816588 4828 
status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.817180 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.817809 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.818417 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.818985 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.819594 4828 
status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.820137 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:56 crc kubenswrapper[4828]: I1129 07:05:56.820435 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:57 crc kubenswrapper[4828]: E1129 07:05:57.271073 4828 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:57 crc kubenswrapper[4828]: E1129 07:05:57.271351 4828 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:57 crc kubenswrapper[4828]: E1129 07:05:57.271577 4828 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" 
Nov 29 07:05:57 crc kubenswrapper[4828]: E1129 07:05:57.271743 4828 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:57 crc kubenswrapper[4828]: E1129 07:05:57.271903 4828 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:57 crc kubenswrapper[4828]: I1129 07:05:57.271930 4828 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 29 07:05:57 crc kubenswrapper[4828]: E1129 07:05:57.272213 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" interval="200ms" Nov 29 07:05:57 crc kubenswrapper[4828]: E1129 07:05:57.473562 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" interval="400ms" Nov 29 07:05:57 crc kubenswrapper[4828]: E1129 07:05:57.874855 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" interval="800ms" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.099734 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.101138 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.102250 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.103102 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.103498 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.104001 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.104492 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.104767 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.105061 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.105336 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.105615 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.105831 4828 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.134113 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.135858 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.137183 4828 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d" exitCode=0 Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.137221 4828 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad" exitCode=0 Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.137232 4828 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20" exitCode=0 Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.138177 4828 scope.go:117] "RemoveContainer" containerID="dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.163160 4828 scope.go:117] "RemoveContainer" containerID="63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.180893 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 
07:05:58.181490 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.181858 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.182135 4828 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.182443 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.182673 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 
07:05:58.182903 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.183144 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.183389 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.183710 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.185927 4828 scope.go:117] "RemoveContainer" containerID="69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.192164 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.192223 4828 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.192855 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.193349 4828 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.193629 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.193860 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.194080 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.194166 4828 status_manager.go:851] "Failed to get status for 
pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.194573 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.194819 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.195333 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.195561 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.195834 4828 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.196082 4828 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.196376 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.196583 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.198125 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.198422 4828 status_manager.go:851] "Failed to get status for pod" 
podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.198701 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.199862 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.200378 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.204869 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.204925 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.205012 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.205127 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.205134 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.205192 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.205796 4828 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.205827 4828 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.205838 4828 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.207136 4828 scope.go:117] "RemoveContainer" containerID="0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.219063 4828 scope.go:117] "RemoveContainer" containerID="ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.235001 4828 scope.go:117] "RemoveContainer" containerID="cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.252689 4828 scope.go:117] "RemoveContainer" containerID="5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.282134 4828 scope.go:117] "RemoveContainer" containerID="dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d" Nov 29 07:05:58 crc kubenswrapper[4828]: E1129 07:05:58.284187 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\": container with ID 
starting with dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d not found: ID does not exist" containerID="dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.284252 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d"} err="failed to get container status \"dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\": rpc error: code = NotFound desc = could not find container \"dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\": container with ID starting with dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d not found: ID does not exist" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.284305 4828 scope.go:117] "RemoveContainer" containerID="63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe" Nov 29 07:05:58 crc kubenswrapper[4828]: E1129 07:05:58.284649 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\": container with ID starting with 63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe not found: ID does not exist" containerID="63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.284672 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe"} err="failed to get container status \"63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\": rpc error: code = NotFound desc = could not find container \"63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\": container with ID starting with 63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe not found: 
ID does not exist" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.284693 4828 scope.go:117] "RemoveContainer" containerID="69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad" Nov 29 07:05:58 crc kubenswrapper[4828]: E1129 07:05:58.285051 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\": container with ID starting with 69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad not found: ID does not exist" containerID="69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.285078 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad"} err="failed to get container status \"69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\": rpc error: code = NotFound desc = could not find container \"69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\": container with ID starting with 69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad not found: ID does not exist" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.285096 4828 scope.go:117] "RemoveContainer" containerID="0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20" Nov 29 07:05:58 crc kubenswrapper[4828]: E1129 07:05:58.287552 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\": container with ID starting with 0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20 not found: ID does not exist" containerID="0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.287865 4828 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20"} err="failed to get container status \"0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\": rpc error: code = NotFound desc = could not find container \"0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\": container with ID starting with 0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20 not found: ID does not exist" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.287907 4828 scope.go:117] "RemoveContainer" containerID="ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5" Nov 29 07:05:58 crc kubenswrapper[4828]: E1129 07:05:58.288401 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\": container with ID starting with ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5 not found: ID does not exist" containerID="ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.288427 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5"} err="failed to get container status \"ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\": rpc error: code = NotFound desc = could not find container \"ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\": container with ID starting with ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5 not found: ID does not exist" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.288445 4828 scope.go:117] "RemoveContainer" containerID="cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3" Nov 29 07:05:58 crc kubenswrapper[4828]: E1129 07:05:58.288826 4828 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\": container with ID starting with cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3 not found: ID does not exist" containerID="cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.288884 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3"} err="failed to get container status \"cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\": rpc error: code = NotFound desc = could not find container \"cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\": container with ID starting with cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3 not found: ID does not exist" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.288908 4828 scope.go:117] "RemoveContainer" containerID="5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9" Nov 29 07:05:58 crc kubenswrapper[4828]: E1129 07:05:58.290414 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\": container with ID starting with 5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9 not found: ID does not exist" containerID="5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.290507 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9"} err="failed to get container status \"5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\": rpc error: code = NotFound desc = could 
not find container \"5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\": container with ID starting with 5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9 not found: ID does not exist" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.290528 4828 scope.go:117] "RemoveContainer" containerID="dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.290944 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d"} err="failed to get container status \"dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\": rpc error: code = NotFound desc = could not find container \"dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d\": container with ID starting with dee64df0f34f6b77dc6e30b870926158df11e288723f9b93d7188fe2e39fd09d not found: ID does not exist" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.290968 4828 scope.go:117] "RemoveContainer" containerID="63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.291460 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe"} err="failed to get container status \"63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\": rpc error: code = NotFound desc = could not find container \"63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe\": container with ID starting with 63c806a3e8c0e2e4442c433e55bc082a26b8d452db3f914a849b3996614a5cbe not found: ID does not exist" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.291525 4828 scope.go:117] "RemoveContainer" containerID="69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 
07:05:58.291910 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad"} err="failed to get container status \"69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\": rpc error: code = NotFound desc = could not find container \"69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad\": container with ID starting with 69e5cdce005a2eca307b0938a29db854ecffaeaa6e08527c8685ffb541f2f9ad not found: ID does not exist" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.291935 4828 scope.go:117] "RemoveContainer" containerID="0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.292821 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20"} err="failed to get container status \"0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\": rpc error: code = NotFound desc = could not find container \"0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20\": container with ID starting with 0a3abdb7f104945aaa0d7972e1c8a444fbb5851220635dc3e328a7a51c002d20 not found: ID does not exist" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.292851 4828 scope.go:117] "RemoveContainer" containerID="ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.293241 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5"} err="failed to get container status \"ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\": rpc error: code = NotFound desc = could not find container \"ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5\": container with ID starting with 
ff6b361027c5378742db51ffe25a83343a604a0c37ca54ba3b291115858acbd5 not found: ID does not exist" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.293261 4828 scope.go:117] "RemoveContainer" containerID="cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.293825 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3"} err="failed to get container status \"cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\": rpc error: code = NotFound desc = could not find container \"cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3\": container with ID starting with cb5b3d05163af7da9b9190008adfd54627ea5072abc8af3fa7136c18f88221d3 not found: ID does not exist" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.293849 4828 scope.go:117] "RemoveContainer" containerID="5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9" Nov 29 07:05:58 crc kubenswrapper[4828]: I1129 07:05:58.294302 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9"} err="failed to get container status \"5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\": rpc error: code = NotFound desc = could not find container \"5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9\": container with ID starting with 5fddcd2778ca81edea9d8d4bfb83864c6a68f68f3eecab795528f6a2dbc16bf9 not found: ID does not exist" Nov 29 07:05:58 crc kubenswrapper[4828]: E1129 07:05:58.676534 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" interval="1.6s" Nov 29 07:05:59 crc 
kubenswrapper[4828]: I1129 07:05:59.144974 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.146802 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8e75326f5bc81545fafd277f3d41240c6a84981e994a534a42aec08805ade4c1"} Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.147381 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.147777 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.148207 4828 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.148673 4828 generic.go:334] "Generic (PLEG): container finished" podID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" containerID="1e3482ebe0278d4a99c2e9aea456d7ce4efbf05ebff9846842c41f0cd72edc64" exitCode=0 Nov 29 07:05:59 crc kubenswrapper[4828]: 
I1129 07:05:59.148701 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.148844 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"88aba7cf-dd10-469c-aea3-11ea4f6b6a01","Type":"ContainerDied","Data":"1e3482ebe0278d4a99c2e9aea456d7ce4efbf05ebff9846842c41f0cd72edc64"} Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.149013 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.150096 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.150331 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.152709 4828 status_manager.go:851] "Failed to get status for pod" 
podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.152929 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.153249 4828 status_manager.go:851] "Failed to get status for pod" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.153516 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.153708 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.153964 4828 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.154221 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.154478 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.154748 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.155005 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.156910 4828 status_manager.go:851] "Failed to get status for pod" 
podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.157174 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.184430 4828 status_manager.go:851] "Failed to get status for pod" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.184847 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.185170 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.185668 4828 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.186246 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.186821 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.187678 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.197872 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.198302 4828 status_manager.go:851] "Failed to get status for pod" 
podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.198492 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.205487 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.205987 4828 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.206150 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.206605 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 
38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.207077 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.207505 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.208128 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.208809 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.209127 4828 status_manager.go:851] "Failed to get status for pod" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.96:6443: connect: 
connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.209590 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.210009 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.211124 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.211782 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.212086 4828 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.212162 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.212786 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.213487 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.213743 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.214024 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.214206 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: 
connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.214763 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.214996 4828 status_manager.go:851] "Failed to get status for pod" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.215176 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.215638 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.215935 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection 
refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.216347 4828 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.216656 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.216935 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.217155 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.217386 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc 
kubenswrapper[4828]: I1129 07:05:59.217775 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.218041 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.218226 4828 status_manager.go:851] "Failed to get status for pod" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:05:59 crc kubenswrapper[4828]: I1129 07:05:59.420292 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 29 07:06:00 crc kubenswrapper[4828]: E1129 07:06:00.278230 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" interval="3.2s" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.409542 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.410131 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.410417 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.410630 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.410815 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.411018 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial 
tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.411222 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.411432 4828 status_manager.go:851] "Failed to get status for pod" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.411618 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.411804 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.548877 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-kubelet-dir\") pod \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\" (UID: \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\") " Nov 29 
07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.548967 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-var-lock\") pod \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\" (UID: \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\") " Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.548994 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "88aba7cf-dd10-469c-aea3-11ea4f6b6a01" (UID: "88aba7cf-dd10-469c-aea3-11ea4f6b6a01"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.549019 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-kube-api-access\") pod \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\" (UID: \"88aba7cf-dd10-469c-aea3-11ea4f6b6a01\") " Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.549050 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-var-lock" (OuterVolumeSpecName: "var-lock") pod "88aba7cf-dd10-469c-aea3-11ea4f6b6a01" (UID: "88aba7cf-dd10-469c-aea3-11ea4f6b6a01"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.549281 4828 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.549314 4828 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-var-lock\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.557928 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "88aba7cf-dd10-469c-aea3-11ea4f6b6a01" (UID: "88aba7cf-dd10-469c-aea3-11ea4f6b6a01"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:00 crc kubenswrapper[4828]: I1129 07:06:00.650528 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88aba7cf-dd10-469c-aea3-11ea4f6b6a01-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.166311 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"88aba7cf-dd10-469c-aea3-11ea4f6b6a01","Type":"ContainerDied","Data":"8e2357d4b99ece83c1f99912d75a086c3805476a13a89c1d69a7cce70ffdfdb9"} Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.166359 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e2357d4b99ece83c1f99912d75a086c3805476a13a89c1d69a7cce70ffdfdb9" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.166406 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.180988 4828 status_manager.go:851] "Failed to get status for pod" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.181335 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.181737 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.182225 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.182751 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": 
dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.183062 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.183417 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.183716 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.183974 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.414366 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.414590 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.414943 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.415461 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.415813 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.417345 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.417744 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.418074 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.418896 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.419140 4828 status_manager.go:851] "Failed to get status for pod" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.419781 4828 status_manager.go:851] "Failed to get status for pod" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.420054 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.420466 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.421046 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.421361 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.421600 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.421995 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.422393 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.423303 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.434339 4828 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.434394 4828 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf" Nov 29 07:06:01 crc kubenswrapper[4828]: E1129 07:06:01.434906 4828 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:01 crc kubenswrapper[4828]: I1129 07:06:01.435505 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:02 crc kubenswrapper[4828]: I1129 07:06:02.173981 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8621bb0db72157df527e6f11a7381dc6933fa7dc3a7d44536b58e687f52fa7d3"} Nov 29 07:06:02 crc kubenswrapper[4828]: I1129 07:06:02.539317 4828 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 29 07:06:02 crc kubenswrapper[4828]: I1129 07:06:02.539416 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 29 07:06:02 crc kubenswrapper[4828]: E1129 07:06:02.831354 4828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.96:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c686807cc518f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:05:50.039110031 +0000 UTC m=+289.661186089,LastTimestamp:2025-11-29 07:05:50.039110031 +0000 UTC m=+289.661186089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:06:03 crc kubenswrapper[4828]: E1129 07:06:03.479112 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" interval="6.4s" Nov 29 07:06:06 crc kubenswrapper[4828]: I1129 07:06:06.280356 4828 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 29 07:06:06 crc kubenswrapper[4828]: I1129 07:06:06.280714 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 29 07:06:09 crc kubenswrapper[4828]: E1129 07:06:09.880553 4828 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.96:6443: connect: connection refused" interval="7s" Nov 29 07:06:11 crc kubenswrapper[4828]: I1129 07:06:11.418409 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:11 crc kubenswrapper[4828]: I1129 07:06:11.419868 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:11 crc kubenswrapper[4828]: I1129 07:06:11.420613 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:11 crc kubenswrapper[4828]: I1129 07:06:11.421210 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:11 crc kubenswrapper[4828]: I1129 07:06:11.421614 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" 
pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:11 crc kubenswrapper[4828]: I1129 07:06:11.421914 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:11 crc kubenswrapper[4828]: I1129 07:06:11.422310 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:11 crc kubenswrapper[4828]: I1129 07:06:11.422615 4828 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:11 crc kubenswrapper[4828]: I1129 07:06:11.422918 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:11 crc kubenswrapper[4828]: I1129 07:06:11.423307 4828 status_manager.go:851] "Failed to get status for pod" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:12 crc kubenswrapper[4828]: I1129 07:06:12.538483 4828 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 29 07:06:12 crc kubenswrapper[4828]: I1129 07:06:12.538579 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 29 07:06:12 crc kubenswrapper[4828]: E1129 07:06:12.832146 4828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.96:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c686807cc518f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:05:50.039110031 +0000 UTC 
m=+289.661186089,LastTimestamp:2025-11-29 07:05:50.039110031 +0000 UTC m=+289.661186089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.254446 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.255630 4828 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca" exitCode=1 Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.255733 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca"} Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.256630 4828 scope.go:117] "RemoveContainer" containerID="cb42b9911132f63e68af89d4ed0ca3cfdcd6615a6a9d14f5f34d55a39b264fca" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.257350 4828 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f46d1badfe6abd163e2ceb8b00c3de130825f856c18614c7167644c8a4c63eca" exitCode=0 Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.257397 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f46d1badfe6abd163e2ceb8b00c3de130825f856c18614c7167644c8a4c63eca"} Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.257598 4828 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.257630 4828 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.258434 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: E1129 07:06:15.258563 4828 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.258859 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.259491 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.259790 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" 
pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.260122 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.260877 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.261916 4828 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.262682 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.263086 4828 status_manager.go:851] "Failed to get status for pod" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.263772 4828 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.264102 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.265491 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.265830 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.266034 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" 
pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.266344 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.266827 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.267495 4828 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.267933 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.268786 4828 status_manager.go:851] "Failed to get status for pod" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.269569 4828 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.270020 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.270459 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.547735 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.550416 4828 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.550678 4828 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.551036 4828 status_manager.go:851] "Failed to get status for pod" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.551233 4828 status_manager.go:851] "Failed to get status for pod" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" pod="openshift-marketplace/redhat-operators-r5hqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r5hqw\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.551422 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.551642 4828 status_manager.go:851] "Failed to get status for pod" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" 
pod="openshift-marketplace/community-operators-vpwkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vpwkr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.552093 4828 status_manager.go:851] "Failed to get status for pod" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" pod="openshift-marketplace/redhat-marketplace-gh2x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gh2x8\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.552603 4828 status_manager.go:851] "Failed to get status for pod" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" pod="openshift-marketplace/community-operators-jxws9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jxws9\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.553009 4828 status_manager.go:851] "Failed to get status for pod" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" pod="openshift-marketplace/redhat-operators-twkcr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-twkcr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.553454 4828 status_manager.go:851] "Failed to get status for pod" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" pod="openshift-marketplace/certified-operators-db4cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-db4cv\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.553753 4828 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:15 crc kubenswrapper[4828]: I1129 07:06:15.554365 4828 status_manager.go:851] "Failed to get status for pod" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" pod="openshift-marketplace/certified-operators-b2qvr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2qvr\": dial tcp 38.129.56.96:6443: connect: connection refused" Nov 29 07:06:16 crc kubenswrapper[4828]: I1129 07:06:16.273629 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 29 07:06:16 crc kubenswrapper[4828]: I1129 07:06:16.274253 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"acb7b193918ced8344bb8a289012f0cde0a30539b154344471e687eff669ed17"} Nov 29 07:06:16 crc kubenswrapper[4828]: I1129 07:06:16.279097 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0e877caeef31b2bb11571fd20b346523bc68999bbe0c409a6a9b26d4155f80c2"} Nov 29 07:06:16 crc kubenswrapper[4828]: I1129 07:06:16.279150 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"aadc3e7a54537eaf0a88c590f2f6cb9386b59b58d4f7b65c7f04c4ca3bb29545"} Nov 29 07:06:16 crc kubenswrapper[4828]: I1129 07:06:16.279165 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fb78b078363f41defab31ebeb78841f6652fa5293e8cd4a23da57ca68a72e989"} Nov 29 07:06:16 crc kubenswrapper[4828]: I1129 07:06:16.279180 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3e66bc9efa4ad95aff4d77433dc9cb6c4a5428650911ab76e85bf3be9e8da4c2"} Nov 29 07:06:17 crc kubenswrapper[4828]: I1129 07:06:17.299933 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7502a1f9b586d8fac50832f3e1e6e13d4e69eecf53e7dce4b1bca73165989875"} Nov 29 07:06:17 crc kubenswrapper[4828]: I1129 07:06:17.300273 4828 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf" Nov 29 07:06:17 crc kubenswrapper[4828]: I1129 07:06:17.300316 4828 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf" Nov 29 07:06:17 crc kubenswrapper[4828]: I1129 07:06:17.300470 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:21 crc kubenswrapper[4828]: I1129 07:06:21.436401 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:21 crc kubenswrapper[4828]: I1129 07:06:21.436884 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:21 crc kubenswrapper[4828]: I1129 07:06:21.442438 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:22 
crc kubenswrapper[4828]: I1129 07:06:22.322464 4828 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:22 crc kubenswrapper[4828]: I1129 07:06:22.398676 4828 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ffaa1eca-361b-448e-9136-403bc8490f31" Nov 29 07:06:22 crc kubenswrapper[4828]: I1129 07:06:22.538414 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:23 crc kubenswrapper[4828]: I1129 07:06:23.154671 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:23 crc kubenswrapper[4828]: I1129 07:06:23.159642 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:23 crc kubenswrapper[4828]: I1129 07:06:23.332077 4828 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf" Nov 29 07:06:23 crc kubenswrapper[4828]: I1129 07:06:23.332120 4828 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf" Nov 29 07:06:23 crc kubenswrapper[4828]: I1129 07:06:23.335950 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:24 crc kubenswrapper[4828]: I1129 07:06:24.337538 4828 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf" Nov 29 07:06:24 crc kubenswrapper[4828]: I1129 07:06:24.337845 4828 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf" Nov 29 07:06:31 crc kubenswrapper[4828]: I1129 07:06:31.443245 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:31 crc kubenswrapper[4828]: I1129 07:06:31.445171 4828 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf" Nov 29 07:06:31 crc kubenswrapper[4828]: I1129 07:06:31.445202 4828 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1b827f94-2d17-4e3d-a5cd-56b1b65eeeaf" Nov 29 07:06:31 crc kubenswrapper[4828]: I1129 07:06:31.454430 4828 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ffaa1eca-361b-448e-9136-403bc8490f31" Nov 29 07:06:32 crc kubenswrapper[4828]: I1129 07:06:32.543648 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:47 crc kubenswrapper[4828]: I1129 07:06:47.256473 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:06:47 crc kubenswrapper[4828]: I1129 07:06:47.456084 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 29 07:06:48 crc kubenswrapper[4828]: I1129 07:06:48.015836 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 29 07:06:48 crc kubenswrapper[4828]: I1129 07:06:48.182746 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 29 07:06:49 crc 
kubenswrapper[4828]: I1129 07:06:49.076431 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 29 07:06:49 crc kubenswrapper[4828]: I1129 07:06:49.736815 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 29 07:06:50 crc kubenswrapper[4828]: I1129 07:06:50.521654 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 29 07:06:51 crc kubenswrapper[4828]: I1129 07:06:51.286701 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 29 07:06:51 crc kubenswrapper[4828]: I1129 07:06:51.807050 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 29 07:06:52 crc kubenswrapper[4828]: I1129 07:06:52.215782 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 29 07:06:52 crc kubenswrapper[4828]: I1129 07:06:52.243317 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 29 07:06:52 crc kubenswrapper[4828]: I1129 07:06:52.451828 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 29 07:06:52 crc kubenswrapper[4828]: I1129 07:06:52.548492 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 29 07:06:52 crc kubenswrapper[4828]: I1129 07:06:52.611859 4828 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 29 07:06:52 crc kubenswrapper[4828]: I1129 07:06:52.612561 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=63.612464113 podStartE2EDuration="1m3.612464113s" podCreationTimestamp="2025-11-29 07:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:06:22.39590005 +0000 UTC m=+322.017976108" watchObservedRunningTime="2025-11-29 07:06:52.612464113 +0000 UTC m=+352.234540161" Nov 29 07:06:52 crc kubenswrapper[4828]: I1129 07:06:52.616664 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 07:06:52 crc kubenswrapper[4828]: I1129 07:06:52.616725 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 07:06:52 crc kubenswrapper[4828]: I1129 07:06:52.641357 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=30.641338015 podStartE2EDuration="30.641338015s" podCreationTimestamp="2025-11-29 07:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:06:52.639344708 +0000 UTC m=+352.261420756" watchObservedRunningTime="2025-11-29 07:06:52.641338015 +0000 UTC m=+352.263414093" Nov 29 07:06:52 crc kubenswrapper[4828]: I1129 07:06:52.714643 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 29 07:06:52 crc kubenswrapper[4828]: I1129 07:06:52.716484 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 29 07:06:52 crc kubenswrapper[4828]: I1129 07:06:52.741985 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 29 07:06:53 crc kubenswrapper[4828]: I1129 07:06:53.381732 4828 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 29 07:06:53 crc kubenswrapper[4828]: I1129 07:06:53.647314 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 29 07:06:53 crc kubenswrapper[4828]: I1129 07:06:53.889572 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 29 07:06:54 crc kubenswrapper[4828]: I1129 07:06:54.296610 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 29 07:06:54 crc kubenswrapper[4828]: I1129 07:06:54.369064 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 29 07:06:54 crc kubenswrapper[4828]: I1129 07:06:54.535455 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 29 07:06:54 crc kubenswrapper[4828]: I1129 07:06:54.745488 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 29 07:06:54 crc kubenswrapper[4828]: I1129 07:06:54.782856 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 29 07:06:55 crc kubenswrapper[4828]: I1129 07:06:55.072329 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 29 07:06:55 crc kubenswrapper[4828]: I1129 07:06:55.209085 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 29 07:06:55 crc kubenswrapper[4828]: I1129 07:06:55.832413 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" 
Nov 29 07:06:55 crc kubenswrapper[4828]: I1129 07:06:55.892119 4828 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 29 07:06:55 crc kubenswrapper[4828]: I1129 07:06:55.912724 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 29 07:06:56 crc kubenswrapper[4828]: I1129 07:06:56.162216 4828 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 07:06:56 crc kubenswrapper[4828]: I1129 07:06:56.162560 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8e75326f5bc81545fafd277f3d41240c6a84981e994a534a42aec08805ade4c1" gracePeriod=5 Nov 29 07:06:56 crc kubenswrapper[4828]: I1129 07:06:56.196763 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 29 07:06:56 crc kubenswrapper[4828]: I1129 07:06:56.239792 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 29 07:06:56 crc kubenswrapper[4828]: I1129 07:06:56.482026 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 29 07:06:56 crc kubenswrapper[4828]: I1129 07:06:56.614279 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 29 07:06:56 crc kubenswrapper[4828]: I1129 07:06:56.700178 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 29 07:06:56 crc kubenswrapper[4828]: I1129 07:06:56.716664 4828 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 29 07:06:56 crc kubenswrapper[4828]: I1129 07:06:56.875259 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 29 07:06:57 crc kubenswrapper[4828]: I1129 07:06:57.009502 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 29 07:06:57 crc kubenswrapper[4828]: I1129 07:06:57.133737 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 29 07:06:57 crc kubenswrapper[4828]: I1129 07:06:57.180443 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 29 07:06:57 crc kubenswrapper[4828]: I1129 07:06:57.194486 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 29 07:06:57 crc kubenswrapper[4828]: I1129 07:06:57.458299 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 29 07:06:57 crc kubenswrapper[4828]: I1129 07:06:57.568173 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 29 07:06:57 crc kubenswrapper[4828]: I1129 07:06:57.778729 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 29 07:06:57 crc kubenswrapper[4828]: I1129 07:06:57.849217 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 29 07:06:57 crc kubenswrapper[4828]: I1129 07:06:57.923955 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 29 07:06:58 crc kubenswrapper[4828]: I1129 
07:06:58.060428 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 29 07:06:58 crc kubenswrapper[4828]: I1129 07:06:58.207694 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 29 07:06:58 crc kubenswrapper[4828]: I1129 07:06:58.676831 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 29 07:06:58 crc kubenswrapper[4828]: I1129 07:06:58.700753 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 29 07:06:58 crc kubenswrapper[4828]: I1129 07:06:58.722455 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 29 07:06:58 crc kubenswrapper[4828]: I1129 07:06:58.738483 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 29 07:06:58 crc kubenswrapper[4828]: I1129 07:06:58.798710 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 29 07:06:58 crc kubenswrapper[4828]: I1129 07:06:58.819113 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 29 07:06:58 crc kubenswrapper[4828]: I1129 07:06:58.920063 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 29 07:06:59 crc kubenswrapper[4828]: I1129 07:06:59.067340 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 29 07:06:59 crc kubenswrapper[4828]: I1129 07:06:59.110047 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:06:59 crc 
kubenswrapper[4828]: I1129 07:06:59.291145 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 29 07:06:59 crc kubenswrapper[4828]: I1129 07:06:59.569261 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 29 07:06:59 crc kubenswrapper[4828]: I1129 07:06:59.646475 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 29 07:06:59 crc kubenswrapper[4828]: I1129 07:06:59.687948 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 29 07:06:59 crc kubenswrapper[4828]: I1129 07:06:59.766791 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 29 07:07:00 crc kubenswrapper[4828]: I1129 07:07:00.087052 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 29 07:07:00 crc kubenswrapper[4828]: I1129 07:07:00.143448 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 29 07:07:00 crc kubenswrapper[4828]: I1129 07:07:00.296664 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 29 07:07:00 crc kubenswrapper[4828]: I1129 07:07:00.370506 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 29 07:07:00 crc kubenswrapper[4828]: I1129 07:07:00.469622 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 29 07:07:00 crc kubenswrapper[4828]: I1129 07:07:00.525616 4828 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 29 07:07:00 crc kubenswrapper[4828]: I1129 07:07:00.526010 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 29 07:07:00 crc kubenswrapper[4828]: I1129 07:07:00.583916 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 29 07:07:00 crc kubenswrapper[4828]: I1129 07:07:00.657232 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 29 07:07:00 crc kubenswrapper[4828]: I1129 07:07:00.691894 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 29 07:07:00 crc kubenswrapper[4828]: I1129 07:07:00.817926 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 29 07:07:00 crc kubenswrapper[4828]: I1129 07:07:00.963508 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 29 07:07:01 crc kubenswrapper[4828]: I1129 07:07:01.272262 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 29 07:07:01 crc kubenswrapper[4828]: I1129 07:07:01.435959 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 29 07:07:01 crc kubenswrapper[4828]: I1129 07:07:01.477610 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 29 07:07:01 crc kubenswrapper[4828]: I1129 07:07:01.492166 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 29 07:07:01 crc 
kubenswrapper[4828]: I1129 07:07:01.690625 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 29 07:07:01 crc kubenswrapper[4828]: I1129 07:07:01.910642 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 29 07:07:01 crc kubenswrapper[4828]: I1129 07:07:01.938427 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 29 07:07:02 crc kubenswrapper[4828]: I1129 07:07:02.185466 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 29 07:07:02 crc kubenswrapper[4828]: I1129 07:07:02.240093 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 29 07:07:02 crc kubenswrapper[4828]: I1129 07:07:02.324510 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 29 07:07:02 crc kubenswrapper[4828]: I1129 07:07:02.475540 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 29 07:07:02 crc kubenswrapper[4828]: I1129 07:07:02.576200 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 29 07:07:02 crc kubenswrapper[4828]: I1129 07:07:02.679368 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 29 07:07:02 crc kubenswrapper[4828]: I1129 07:07:02.722588 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 29 07:07:02 crc kubenswrapper[4828]: I1129 07:07:02.993128 4828 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"console-config" Nov 29 07:07:03 crc kubenswrapper[4828]: I1129 07:07:03.104547 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 29 07:07:03 crc kubenswrapper[4828]: I1129 07:07:03.158416 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 29 07:07:03 crc kubenswrapper[4828]: I1129 07:07:03.382345 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 29 07:07:03 crc kubenswrapper[4828]: I1129 07:07:03.467423 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 29 07:07:03 crc kubenswrapper[4828]: I1129 07:07:03.606623 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 29 07:07:03 crc kubenswrapper[4828]: I1129 07:07:03.719792 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 29 07:07:03 crc kubenswrapper[4828]: I1129 07:07:03.871723 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 29 07:07:03 crc kubenswrapper[4828]: I1129 07:07:03.930651 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 29 07:07:04 crc kubenswrapper[4828]: I1129 07:07:04.243445 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 29 07:07:04 crc kubenswrapper[4828]: I1129 07:07:04.273703 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 29 07:07:04 crc kubenswrapper[4828]: I1129 
07:07:04.277661 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 29 07:07:04 crc kubenswrapper[4828]: I1129 07:07:04.431508 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 29 07:07:04 crc kubenswrapper[4828]: I1129 07:07:04.483099 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 29 07:07:04 crc kubenswrapper[4828]: I1129 07:07:04.548682 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 29 07:07:04 crc kubenswrapper[4828]: I1129 07:07:04.773740 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 29 07:07:04 crc kubenswrapper[4828]: I1129 07:07:04.931313 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 29 07:07:05 crc kubenswrapper[4828]: I1129 07:07:05.003492 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 29 07:07:05 crc kubenswrapper[4828]: I1129 07:07:05.101624 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 29 07:07:05 crc kubenswrapper[4828]: I1129 07:07:05.327431 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 29 07:07:05 crc kubenswrapper[4828]: I1129 07:07:05.470554 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 29 07:07:05 crc kubenswrapper[4828]: I1129 07:07:05.613000 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 29 07:07:05 crc 
kubenswrapper[4828]: I1129 07:07:05.653982 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 29 07:07:05 crc kubenswrapper[4828]: I1129 07:07:05.693377 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 29 07:07:05 crc kubenswrapper[4828]: I1129 07:07:05.935883 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 29 07:07:05 crc kubenswrapper[4828]: I1129 07:07:05.973434 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 29 07:07:05 crc kubenswrapper[4828]: I1129 07:07:05.974020 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:07:05 crc kubenswrapper[4828]: I1129 07:07:05.987673 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.112440 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.112520 4828 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8e75326f5bc81545fafd277f3d41240c6a84981e994a534a42aec08805ade4c1" exitCode=137 Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.112631 4828 scope.go:117] "RemoveContainer" containerID="8e75326f5bc81545fafd277f3d41240c6a84981e994a534a42aec08805ade4c1" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.122929 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.123004 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.123076 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.123096 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.123125 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.123135 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.123148 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.123228 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.123243 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.124466 4828 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.124631 4828 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.124871 4828 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.125020 4828 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.125740 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.137771 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.228141 4828 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.521127 4828 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.702923 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.726130 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.737106 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.745188 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.751694 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.778699 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.846236 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.855335 4828 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 29 07:07:06 crc 
kubenswrapper[4828]: I1129 07:07:06.885362 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.942608 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 29 07:07:06 crc kubenswrapper[4828]: I1129 07:07:06.947362 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 29 07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.118984 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.331610 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 29 07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.371762 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 29 07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.375792 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 29 07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.418655 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 29 07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.418949 4828 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 29 07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.434002 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 
07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.434096 4828 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2ca729dd-c447-4e0f-9706-4dc819aaca31" Nov 29 07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.438650 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.438941 4828 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2ca729dd-c447-4e0f-9706-4dc819aaca31" Nov 29 07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.539609 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 29 07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.543784 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 29 07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.655525 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 29 07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.818234 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 29 07:07:07 crc kubenswrapper[4828]: I1129 07:07:07.872516 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 29 07:07:08 crc kubenswrapper[4828]: I1129 07:07:08.095007 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 29 07:07:08 crc kubenswrapper[4828]: I1129 07:07:08.275339 4828 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"metrics-tls" Nov 29 07:07:08 crc kubenswrapper[4828]: I1129 07:07:08.365348 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 29 07:07:08 crc kubenswrapper[4828]: I1129 07:07:08.370701 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 29 07:07:08 crc kubenswrapper[4828]: I1129 07:07:08.549470 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 29 07:07:08 crc kubenswrapper[4828]: I1129 07:07:08.614008 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 29 07:07:08 crc kubenswrapper[4828]: I1129 07:07:08.654840 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 29 07:07:08 crc kubenswrapper[4828]: I1129 07:07:08.711828 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 29 07:07:08 crc kubenswrapper[4828]: I1129 07:07:08.776702 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 29 07:07:08 crc kubenswrapper[4828]: I1129 07:07:08.883609 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 29 07:07:08 crc kubenswrapper[4828]: I1129 07:07:08.914035 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 29 07:07:08 crc kubenswrapper[4828]: I1129 07:07:08.966345 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 29 07:07:09 crc kubenswrapper[4828]: I1129 07:07:09.162953 4828 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 29 07:07:09 crc kubenswrapper[4828]: I1129 07:07:09.190897 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 29 07:07:09 crc kubenswrapper[4828]: I1129 07:07:09.256961 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 29 07:07:09 crc kubenswrapper[4828]: I1129 07:07:09.306897 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 29 07:07:09 crc kubenswrapper[4828]: I1129 07:07:09.306898 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 29 07:07:09 crc kubenswrapper[4828]: I1129 07:07:09.326386 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 29 07:07:09 crc kubenswrapper[4828]: I1129 07:07:09.448612 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 29 07:07:09 crc kubenswrapper[4828]: I1129 07:07:09.486999 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 29 07:07:09 crc kubenswrapper[4828]: I1129 07:07:09.682714 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 29 07:07:10 crc kubenswrapper[4828]: I1129 07:07:10.071240 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 29 07:07:10 crc kubenswrapper[4828]: I1129 07:07:10.153482 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 29 07:07:10 crc kubenswrapper[4828]: I1129 07:07:10.351567 4828 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 29 07:07:10 crc kubenswrapper[4828]: I1129 07:07:10.436073 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 29 07:07:10 crc kubenswrapper[4828]: I1129 07:07:10.666362 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 29 07:07:10 crc kubenswrapper[4828]: I1129 07:07:10.761819 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 29 07:07:10 crc kubenswrapper[4828]: I1129 07:07:10.778502 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 29 07:07:10 crc kubenswrapper[4828]: I1129 07:07:10.782681 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 29 07:07:10 crc kubenswrapper[4828]: I1129 07:07:10.991103 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 29 07:07:11 crc kubenswrapper[4828]: I1129 07:07:11.199875 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 29 07:07:11 crc kubenswrapper[4828]: I1129 07:07:11.270192 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 29 07:07:11 crc kubenswrapper[4828]: I1129 07:07:11.308506 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 29 07:07:11 crc kubenswrapper[4828]: I1129 07:07:11.309516 4828 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 29 07:07:11 crc kubenswrapper[4828]: I1129 07:07:11.349307 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 29 07:07:11 crc kubenswrapper[4828]: I1129 07:07:11.412255 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 29 07:07:11 crc kubenswrapper[4828]: I1129 07:07:11.487519 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:07:11 crc kubenswrapper[4828]: I1129 07:07:11.487624 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:07:11 crc kubenswrapper[4828]: I1129 07:07:11.647221 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 29 07:07:11 crc kubenswrapper[4828]: I1129 07:07:11.724108 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 29 07:07:11 crc kubenswrapper[4828]: I1129 07:07:11.797877 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 29 07:07:11 crc kubenswrapper[4828]: I1129 07:07:11.879632 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 29 07:07:11 crc 
kubenswrapper[4828]: I1129 07:07:11.994529 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 29 07:07:12 crc kubenswrapper[4828]: I1129 07:07:12.076498 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 29 07:07:12 crc kubenswrapper[4828]: I1129 07:07:12.189880 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 29 07:07:12 crc kubenswrapper[4828]: I1129 07:07:12.246881 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 29 07:07:12 crc kubenswrapper[4828]: I1129 07:07:12.297945 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 29 07:07:12 crc kubenswrapper[4828]: I1129 07:07:12.316066 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 29 07:07:12 crc kubenswrapper[4828]: I1129 07:07:12.395879 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 29 07:07:12 crc kubenswrapper[4828]: I1129 07:07:12.425464 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 29 07:07:12 crc kubenswrapper[4828]: I1129 07:07:12.519673 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 29 07:07:12 crc kubenswrapper[4828]: I1129 07:07:12.772899 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 29 07:07:12 crc kubenswrapper[4828]: I1129 07:07:12.778446 4828 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 29 07:07:12 crc kubenswrapper[4828]: I1129 07:07:12.902229 4828 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 29 07:07:13 crc kubenswrapper[4828]: I1129 07:07:13.439336 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 29 07:07:13 crc kubenswrapper[4828]: I1129 07:07:13.655113 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 29 07:07:13 crc kubenswrapper[4828]: I1129 07:07:13.657556 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 29 07:07:13 crc kubenswrapper[4828]: I1129 07:07:13.694640 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 29 07:07:13 crc kubenswrapper[4828]: I1129 07:07:13.777522 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 29 07:07:13 crc kubenswrapper[4828]: I1129 07:07:13.805709 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 29 07:07:13 crc kubenswrapper[4828]: I1129 07:07:13.893312 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 29 07:07:13 crc kubenswrapper[4828]: I1129 07:07:13.894354 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 29 07:07:13 crc kubenswrapper[4828]: I1129 07:07:13.998225 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 29 07:07:14 crc kubenswrapper[4828]: I1129 07:07:14.164920 4828 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 29 07:07:14 crc kubenswrapper[4828]: I1129 07:07:14.433007 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 29 07:07:14 crc kubenswrapper[4828]: I1129 07:07:14.446605 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 29 07:07:14 crc kubenswrapper[4828]: I1129 07:07:14.594965 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 29 07:07:14 crc kubenswrapper[4828]: I1129 07:07:14.793235 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 29 07:07:14 crc kubenswrapper[4828]: I1129 07:07:14.815108 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 29 07:07:14 crc kubenswrapper[4828]: I1129 07:07:14.829536 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 29 07:07:14 crc kubenswrapper[4828]: I1129 07:07:14.897210 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 29 07:07:14 crc kubenswrapper[4828]: I1129 07:07:14.974540 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 29 07:07:14 crc kubenswrapper[4828]: I1129 07:07:14.999860 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 29 07:07:15 crc kubenswrapper[4828]: I1129 07:07:15.403767 4828 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 29 07:07:15 crc kubenswrapper[4828]: I1129 07:07:15.469322 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 29 07:07:15 crc kubenswrapper[4828]: I1129 07:07:15.734441 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 29 07:07:15 crc kubenswrapper[4828]: I1129 07:07:15.849712 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 29 07:07:16 crc kubenswrapper[4828]: I1129 07:07:16.236930 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 29 07:07:16 crc kubenswrapper[4828]: I1129 07:07:16.258219 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 29 07:07:16 crc kubenswrapper[4828]: I1129 07:07:16.326052 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 29 07:07:16 crc kubenswrapper[4828]: I1129 07:07:16.498331 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 29 07:07:16 crc kubenswrapper[4828]: I1129 07:07:16.566659 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 29 07:07:16 crc kubenswrapper[4828]: I1129 07:07:16.664313 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 29 07:07:16 crc kubenswrapper[4828]: I1129 07:07:16.800799 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 29 07:07:17 crc kubenswrapper[4828]: I1129 07:07:17.175046 4828 generic.go:334] "Generic (PLEG): container 
finished" podID="5ba8ca1a-d67d-4042-bebb-94891b81644f" containerID="3172a42d5f8110f44e34db1dfec5519db7aa33bcb60a58de6dc264065bb01a77" exitCode=0 Nov 29 07:07:17 crc kubenswrapper[4828]: I1129 07:07:17.175151 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" event={"ID":"5ba8ca1a-d67d-4042-bebb-94891b81644f","Type":"ContainerDied","Data":"3172a42d5f8110f44e34db1dfec5519db7aa33bcb60a58de6dc264065bb01a77"} Nov 29 07:07:17 crc kubenswrapper[4828]: I1129 07:07:17.175815 4828 scope.go:117] "RemoveContainer" containerID="3172a42d5f8110f44e34db1dfec5519db7aa33bcb60a58de6dc264065bb01a77" Nov 29 07:07:17 crc kubenswrapper[4828]: I1129 07:07:17.236511 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 29 07:07:17 crc kubenswrapper[4828]: I1129 07:07:17.480156 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 29 07:07:17 crc kubenswrapper[4828]: I1129 07:07:17.602201 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 29 07:07:17 crc kubenswrapper[4828]: I1129 07:07:17.631014 4828 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 29 07:07:17 crc kubenswrapper[4828]: I1129 07:07:17.864644 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 29 07:07:17 crc kubenswrapper[4828]: I1129 07:07:17.959101 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:07:17 crc kubenswrapper[4828]: I1129 07:07:17.959254 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" 
Nov 29 07:07:17 crc kubenswrapper[4828]: I1129 07:07:17.978721 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.011254 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b2qvr"] Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.011657 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b2qvr" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" containerName="registry-server" containerID="cri-o://4a5f781db81fdabc05e7b02dbead37353877ef046a552098f8548afb684ad85b" gracePeriod=30 Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.018660 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-db4cv"] Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.018991 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-db4cv" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" containerName="registry-server" containerID="cri-o://7f71177be44eb6dafe0c04817046796d8c8193abac5819ba60da7ed4991c7f8a" gracePeriod=30 Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.026480 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jxws9"] Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.026818 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jxws9" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" containerName="registry-server" containerID="cri-o://fd4789150dfa94299901fcf6cb0d91e7485402c5760cd86a7674971fbf200b37" gracePeriod=30 Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.033765 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpwkr"] Nov 
29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.034030 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vpwkr" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" containerName="registry-server" containerID="cri-o://c07b99bff05677b5b93955bed8db2dc66bb41624e9a2b9117367a8456f26b09a" gracePeriod=30 Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.038581 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hmxx8"] Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.050971 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh2x8"] Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.051504 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gh2x8" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" containerName="registry-server" containerID="cri-o://71913356c8df2b2facbff98e5b645e5300ab20f6423476b7663b8b339b21b543" gracePeriod=30 Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.070518 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vktx7"] Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.071194 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vktx7" podUID="81124877-aea7-4853-b4da-978dcf29d980" containerName="registry-server" containerID="cri-o://cea5d2f28a988dca7a0d0bc233355f32300f9a7d117a5afc5fa685fe2950fabd" gracePeriod=30 Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.073580 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqxp4"] Nov 29 07:07:18 crc kubenswrapper[4828]: E1129 07:07:18.073943 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.073971 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 29 07:07:18 crc kubenswrapper[4828]: E1129 07:07:18.073996 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" containerName="installer" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.074005 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" containerName="installer" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.074192 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="88aba7cf-dd10-469c-aea3-11ea4f6b6a01" containerName="installer" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.074223 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.074897 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.092026 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r5hqw"] Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.092105 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-twkcr"] Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.092375 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqxp4"] Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.092396 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-twkcr" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" containerName="registry-server" containerID="cri-o://9d599274cbaabb060d99ed6f234a6ec172d63155cb7be254326a1451f86df015" gracePeriod=30 Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.092612 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r5hqw" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" containerName="registry-server" containerID="cri-o://9cc78342c8838f578ae52889a768947d187b351b8a3d2057f86364af88b8a293" gracePeriod=30 Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.192336 4828 generic.go:334] "Generic (PLEG): container finished" podID="eccbf47b-47fe-4980-b09b-cde621bb188a" containerID="c07b99bff05677b5b93955bed8db2dc66bb41624e9a2b9117367a8456f26b09a" exitCode=0 Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.192513 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpwkr" event={"ID":"eccbf47b-47fe-4980-b09b-cde621bb188a","Type":"ContainerDied","Data":"c07b99bff05677b5b93955bed8db2dc66bb41624e9a2b9117367a8456f26b09a"} Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.193431 4828 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8d6f6ac7-9c5b-4828-98e7-d047f395ff83-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zqxp4\" (UID: \"8d6f6ac7-9c5b-4828-98e7-d047f395ff83\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.193479 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d6f6ac7-9c5b-4828-98e7-d047f395ff83-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zqxp4\" (UID: \"8d6f6ac7-9c5b-4828-98e7-d047f395ff83\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.193747 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stg8p\" (UniqueName: \"kubernetes.io/projected/8d6f6ac7-9c5b-4828-98e7-d047f395ff83-kube-api-access-stg8p\") pod \"marketplace-operator-79b997595-zqxp4\" (UID: \"8d6f6ac7-9c5b-4828-98e7-d047f395ff83\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.200322 4828 generic.go:334] "Generic (PLEG): container finished" podID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" containerID="4a5f781db81fdabc05e7b02dbead37353877ef046a552098f8548afb684ad85b" exitCode=0 Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.200413 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2qvr" event={"ID":"1c5bb383-f3ed-43cd-b62c-38d3e2922f11","Type":"ContainerDied","Data":"4a5f781db81fdabc05e7b02dbead37353877ef046a552098f8548afb684ad85b"} Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.202776 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" event={"ID":"5ba8ca1a-d67d-4042-bebb-94891b81644f","Type":"ContainerStarted","Data":"1000791b8cb87dd1bb011800fe78c6f35113543fc3516b4a239380340c263265"} Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.202990 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" podUID="5ba8ca1a-d67d-4042-bebb-94891b81644f" containerName="marketplace-operator" containerID="cri-o://1000791b8cb87dd1bb011800fe78c6f35113543fc3516b4a239380340c263265" gracePeriod=30 Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.204096 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.211935 4828 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hmxx8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.211994 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" podUID="5ba8ca1a-d67d-4042-bebb-94891b81644f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.215557 4828 generic.go:334] "Generic (PLEG): container finished" podID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" containerID="fd4789150dfa94299901fcf6cb0d91e7485402c5760cd86a7674971fbf200b37" exitCode=0 Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.215719 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxws9" 
event={"ID":"9a9da14c-b652-4eca-bf03-8eedf90d40fe","Type":"ContainerDied","Data":"fd4789150dfa94299901fcf6cb0d91e7485402c5760cd86a7674971fbf200b37"} Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.225626 4828 generic.go:334] "Generic (PLEG): container finished" podID="0a44e830-89c8-428e-ab90-d8936c069de4" containerID="7f71177be44eb6dafe0c04817046796d8c8193abac5819ba60da7ed4991c7f8a" exitCode=0 Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.225684 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-db4cv" event={"ID":"0a44e830-89c8-428e-ab90-d8936c069de4","Type":"ContainerDied","Data":"7f71177be44eb6dafe0c04817046796d8c8193abac5819ba60da7ed4991c7f8a"} Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.295087 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stg8p\" (UniqueName: \"kubernetes.io/projected/8d6f6ac7-9c5b-4828-98e7-d047f395ff83-kube-api-access-stg8p\") pod \"marketplace-operator-79b997595-zqxp4\" (UID: \"8d6f6ac7-9c5b-4828-98e7-d047f395ff83\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.295258 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8d6f6ac7-9c5b-4828-98e7-d047f395ff83-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zqxp4\" (UID: \"8d6f6ac7-9c5b-4828-98e7-d047f395ff83\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.295383 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d6f6ac7-9c5b-4828-98e7-d047f395ff83-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zqxp4\" (UID: \"8d6f6ac7-9c5b-4828-98e7-d047f395ff83\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.297552 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d6f6ac7-9c5b-4828-98e7-d047f395ff83-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zqxp4\" (UID: \"8d6f6ac7-9c5b-4828-98e7-d047f395ff83\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.303891 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8d6f6ac7-9c5b-4828-98e7-d047f395ff83-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zqxp4\" (UID: \"8d6f6ac7-9c5b-4828-98e7-d047f395ff83\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.317159 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stg8p\" (UniqueName: \"kubernetes.io/projected/8d6f6ac7-9c5b-4828-98e7-d047f395ff83-kube-api-access-stg8p\") pod \"marketplace-operator-79b997595-zqxp4\" (UID: \"8d6f6ac7-9c5b-4828-98e7-d047f395ff83\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.324421 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.786077 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.788398 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.791294 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.795441 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.819074 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.821675 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.827191 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vktx7" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.841852 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.845460 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.865999 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hmxx8_5ba8ca1a-d67d-4042-bebb-94891b81644f/marketplace-operator/1.log" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.866672 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.907598 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a44e830-89c8-428e-ab90-d8936c069de4-catalog-content\") pod \"0a44e830-89c8-428e-ab90-d8936c069de4\" (UID: \"0a44e830-89c8-428e-ab90-d8936c069de4\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.907685 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znxdd\" (UniqueName: \"kubernetes.io/projected/097b513c-f25d-4a6d-9c88-90ac8f322a19-kube-api-access-znxdd\") pod \"097b513c-f25d-4a6d-9c88-90ac8f322a19\" (UID: \"097b513c-f25d-4a6d-9c88-90ac8f322a19\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.907720 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q9v4\" (UniqueName: \"kubernetes.io/projected/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-kube-api-access-5q9v4\") pod \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\" (UID: \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.907747 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzt99\" (UniqueName: \"kubernetes.io/projected/5ba8ca1a-d67d-4042-bebb-94891b81644f-kube-api-access-tzt99\") pod \"5ba8ca1a-d67d-4042-bebb-94891b81644f\" (UID: \"5ba8ca1a-d67d-4042-bebb-94891b81644f\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908103 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xkj7\" (UniqueName: \"kubernetes.io/projected/edc8363b-0cee-48b5-b568-8a694fdc91eb-kube-api-access-4xkj7\") pod \"edc8363b-0cee-48b5-b568-8a694fdc91eb\" (UID: \"edc8363b-0cee-48b5-b568-8a694fdc91eb\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908148 4828 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47mv5\" (UniqueName: \"kubernetes.io/projected/9a9da14c-b652-4eca-bf03-8eedf90d40fe-kube-api-access-47mv5\") pod \"9a9da14c-b652-4eca-bf03-8eedf90d40fe\" (UID: \"9a9da14c-b652-4eca-bf03-8eedf90d40fe\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908176 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-utilities\") pod \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\" (UID: \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908203 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc8363b-0cee-48b5-b568-8a694fdc91eb-catalog-content\") pod \"edc8363b-0cee-48b5-b568-8a694fdc91eb\" (UID: \"edc8363b-0cee-48b5-b568-8a694fdc91eb\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908229 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc8363b-0cee-48b5-b568-8a694fdc91eb-utilities\") pod \"edc8363b-0cee-48b5-b568-8a694fdc91eb\" (UID: \"edc8363b-0cee-48b5-b568-8a694fdc91eb\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908254 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm555\" (UniqueName: \"kubernetes.io/projected/0a44e830-89c8-428e-ab90-d8936c069de4-kube-api-access-rm555\") pod \"0a44e830-89c8-428e-ab90-d8936c069de4\" (UID: \"0a44e830-89c8-428e-ab90-d8936c069de4\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908343 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a9da14c-b652-4eca-bf03-8eedf90d40fe-utilities\") pod 
\"9a9da14c-b652-4eca-bf03-8eedf90d40fe\" (UID: \"9a9da14c-b652-4eca-bf03-8eedf90d40fe\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908372 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ba8ca1a-d67d-4042-bebb-94891b81644f-marketplace-trusted-ca\") pod \"5ba8ca1a-d67d-4042-bebb-94891b81644f\" (UID: \"5ba8ca1a-d67d-4042-bebb-94891b81644f\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908412 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9da14c-b652-4eca-bf03-8eedf90d40fe-catalog-content\") pod \"9a9da14c-b652-4eca-bf03-8eedf90d40fe\" (UID: \"9a9da14c-b652-4eca-bf03-8eedf90d40fe\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908437 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81124877-aea7-4853-b4da-978dcf29d980-catalog-content\") pod \"81124877-aea7-4853-b4da-978dcf29d980\" (UID: \"81124877-aea7-4853-b4da-978dcf29d980\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908465 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/097b513c-f25d-4a6d-9c88-90ac8f322a19-catalog-content\") pod \"097b513c-f25d-4a6d-9c88-90ac8f322a19\" (UID: \"097b513c-f25d-4a6d-9c88-90ac8f322a19\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908488 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81124877-aea7-4853-b4da-978dcf29d980-utilities\") pod \"81124877-aea7-4853-b4da-978dcf29d980\" (UID: \"81124877-aea7-4853-b4da-978dcf29d980\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908563 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/097b513c-f25d-4a6d-9c88-90ac8f322a19-utilities\") pod \"097b513c-f25d-4a6d-9c88-90ac8f322a19\" (UID: \"097b513c-f25d-4a6d-9c88-90ac8f322a19\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908600 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72bzk\" (UniqueName: \"kubernetes.io/projected/81124877-aea7-4853-b4da-978dcf29d980-kube-api-access-72bzk\") pod \"81124877-aea7-4853-b4da-978dcf29d980\" (UID: \"81124877-aea7-4853-b4da-978dcf29d980\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908635 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5ba8ca1a-d67d-4042-bebb-94891b81644f-marketplace-operator-metrics\") pod \"5ba8ca1a-d67d-4042-bebb-94891b81644f\" (UID: \"5ba8ca1a-d67d-4042-bebb-94891b81644f\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908669 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a44e830-89c8-428e-ab90-d8936c069de4-utilities\") pod \"0a44e830-89c8-428e-ab90-d8936c069de4\" (UID: \"0a44e830-89c8-428e-ab90-d8936c069de4\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.908706 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-catalog-content\") pod \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\" (UID: \"1c5bb383-f3ed-43cd-b62c-38d3e2922f11\") " Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.909482 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-utilities" (OuterVolumeSpecName: "utilities") pod "1c5bb383-f3ed-43cd-b62c-38d3e2922f11" (UID: 
"1c5bb383-f3ed-43cd-b62c-38d3e2922f11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.909760 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81124877-aea7-4853-b4da-978dcf29d980-utilities" (OuterVolumeSpecName: "utilities") pod "81124877-aea7-4853-b4da-978dcf29d980" (UID: "81124877-aea7-4853-b4da-978dcf29d980"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.909819 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a44e830-89c8-428e-ab90-d8936c069de4-utilities" (OuterVolumeSpecName: "utilities") pod "0a44e830-89c8-428e-ab90-d8936c069de4" (UID: "0a44e830-89c8-428e-ab90-d8936c069de4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.910477 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9da14c-b652-4eca-bf03-8eedf90d40fe-utilities" (OuterVolumeSpecName: "utilities") pod "9a9da14c-b652-4eca-bf03-8eedf90d40fe" (UID: "9a9da14c-b652-4eca-bf03-8eedf90d40fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.911150 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba8ca1a-d67d-4042-bebb-94891b81644f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5ba8ca1a-d67d-4042-bebb-94891b81644f" (UID: "5ba8ca1a-d67d-4042-bebb-94891b81644f"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.911844 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc8363b-0cee-48b5-b568-8a694fdc91eb-utilities" (OuterVolumeSpecName: "utilities") pod "edc8363b-0cee-48b5-b568-8a694fdc91eb" (UID: "edc8363b-0cee-48b5-b568-8a694fdc91eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.912400 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/097b513c-f25d-4a6d-9c88-90ac8f322a19-utilities" (OuterVolumeSpecName: "utilities") pod "097b513c-f25d-4a6d-9c88-90ac8f322a19" (UID: "097b513c-f25d-4a6d-9c88-90ac8f322a19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.916531 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-kube-api-access-5q9v4" (OuterVolumeSpecName: "kube-api-access-5q9v4") pod "1c5bb383-f3ed-43cd-b62c-38d3e2922f11" (UID: "1c5bb383-f3ed-43cd-b62c-38d3e2922f11"). InnerVolumeSpecName "kube-api-access-5q9v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.925928 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ba8ca1a-d67d-4042-bebb-94891b81644f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5ba8ca1a-d67d-4042-bebb-94891b81644f" (UID: "5ba8ca1a-d67d-4042-bebb-94891b81644f"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.935423 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81124877-aea7-4853-b4da-978dcf29d980-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81124877-aea7-4853-b4da-978dcf29d980" (UID: "81124877-aea7-4853-b4da-978dcf29d980"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.937851 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a44e830-89c8-428e-ab90-d8936c069de4-kube-api-access-rm555" (OuterVolumeSpecName: "kube-api-access-rm555") pod "0a44e830-89c8-428e-ab90-d8936c069de4" (UID: "0a44e830-89c8-428e-ab90-d8936c069de4"). InnerVolumeSpecName "kube-api-access-rm555". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.938760 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9da14c-b652-4eca-bf03-8eedf90d40fe-kube-api-access-47mv5" (OuterVolumeSpecName: "kube-api-access-47mv5") pod "9a9da14c-b652-4eca-bf03-8eedf90d40fe" (UID: "9a9da14c-b652-4eca-bf03-8eedf90d40fe"). InnerVolumeSpecName "kube-api-access-47mv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.939213 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc8363b-0cee-48b5-b568-8a694fdc91eb-kube-api-access-4xkj7" (OuterVolumeSpecName: "kube-api-access-4xkj7") pod "edc8363b-0cee-48b5-b568-8a694fdc91eb" (UID: "edc8363b-0cee-48b5-b568-8a694fdc91eb"). InnerVolumeSpecName "kube-api-access-4xkj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.939799 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/097b513c-f25d-4a6d-9c88-90ac8f322a19-kube-api-access-znxdd" (OuterVolumeSpecName: "kube-api-access-znxdd") pod "097b513c-f25d-4a6d-9c88-90ac8f322a19" (UID: "097b513c-f25d-4a6d-9c88-90ac8f322a19"). InnerVolumeSpecName "kube-api-access-znxdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.941130 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81124877-aea7-4853-b4da-978dcf29d980-kube-api-access-72bzk" (OuterVolumeSpecName: "kube-api-access-72bzk") pod "81124877-aea7-4853-b4da-978dcf29d980" (UID: "81124877-aea7-4853-b4da-978dcf29d980"). InnerVolumeSpecName "kube-api-access-72bzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:07:18 crc kubenswrapper[4828]: I1129 07:07:18.960954 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ba8ca1a-d67d-4042-bebb-94891b81644f-kube-api-access-tzt99" (OuterVolumeSpecName: "kube-api-access-tzt99") pod "5ba8ca1a-d67d-4042-bebb-94891b81644f" (UID: "5ba8ca1a-d67d-4042-bebb-94891b81644f"). InnerVolumeSpecName "kube-api-access-tzt99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010494 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a44e830-89c8-428e-ab90-d8936c069de4-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010541 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znxdd\" (UniqueName: \"kubernetes.io/projected/097b513c-f25d-4a6d-9c88-90ac8f322a19-kube-api-access-znxdd\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010577 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzt99\" (UniqueName: \"kubernetes.io/projected/5ba8ca1a-d67d-4042-bebb-94891b81644f-kube-api-access-tzt99\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010589 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q9v4\" (UniqueName: \"kubernetes.io/projected/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-kube-api-access-5q9v4\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010601 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xkj7\" (UniqueName: \"kubernetes.io/projected/edc8363b-0cee-48b5-b568-8a694fdc91eb-kube-api-access-4xkj7\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010611 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47mv5\" (UniqueName: \"kubernetes.io/projected/9a9da14c-b652-4eca-bf03-8eedf90d40fe-kube-api-access-47mv5\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010622 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 
crc kubenswrapper[4828]: I1129 07:07:19.010634 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc8363b-0cee-48b5-b568-8a694fdc91eb-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010656 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm555\" (UniqueName: \"kubernetes.io/projected/0a44e830-89c8-428e-ab90-d8936c069de4-kube-api-access-rm555\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010667 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a9da14c-b652-4eca-bf03-8eedf90d40fe-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010679 4828 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ba8ca1a-d67d-4042-bebb-94891b81644f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010692 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81124877-aea7-4853-b4da-978dcf29d980-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010704 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81124877-aea7-4853-b4da-978dcf29d980-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010716 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/097b513c-f25d-4a6d-9c88-90ac8f322a19-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010728 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72bzk\" 
(UniqueName: \"kubernetes.io/projected/81124877-aea7-4853-b4da-978dcf29d980-kube-api-access-72bzk\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.010740 4828 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5ba8ca1a-d67d-4042-bebb-94891b81644f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.022999 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9da14c-b652-4eca-bf03-8eedf90d40fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a9da14c-b652-4eca-bf03-8eedf90d40fe" (UID: "9a9da14c-b652-4eca-bf03-8eedf90d40fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.024534 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.027345 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c5bb383-f3ed-43cd-b62c-38d3e2922f11" (UID: "1c5bb383-f3ed-43cd-b62c-38d3e2922f11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.027495 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a44e830-89c8-428e-ab90-d8936c069de4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a44e830-89c8-428e-ab90-d8936c069de4" (UID: "0a44e830-89c8-428e-ab90-d8936c069de4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.100202 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/097b513c-f25d-4a6d-9c88-90ac8f322a19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "097b513c-f25d-4a6d-9c88-90ac8f322a19" (UID: "097b513c-f25d-4a6d-9c88-90ac8f322a19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.104485 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc8363b-0cee-48b5-b568-8a694fdc91eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edc8363b-0cee-48b5-b568-8a694fdc91eb" (UID: "edc8363b-0cee-48b5-b568-8a694fdc91eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.111618 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc8363b-0cee-48b5-b568-8a694fdc91eb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.111657 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9da14c-b652-4eca-bf03-8eedf90d40fe-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.111670 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/097b513c-f25d-4a6d-9c88-90ac8f322a19-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.111683 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5bb383-f3ed-43cd-b62c-38d3e2922f11-catalog-content\") on 
node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.111727 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a44e830-89c8-428e-ab90-d8936c069de4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.163451 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.184231 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.184455 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.200951 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.212714 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccbf47b-47fe-4980-b09b-cde621bb188a-utilities\") pod \"eccbf47b-47fe-4980-b09b-cde621bb188a\" (UID: \"eccbf47b-47fe-4980-b09b-cde621bb188a\") " Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.212788 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35451e26-ec80-4e68-bf86-4f0990c394af-catalog-content\") pod \"35451e26-ec80-4e68-bf86-4f0990c394af\" (UID: \"35451e26-ec80-4e68-bf86-4f0990c394af\") " Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.212875 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2vnz\" (UniqueName: 
\"kubernetes.io/projected/35451e26-ec80-4e68-bf86-4f0990c394af-kube-api-access-q2vnz\") pod \"35451e26-ec80-4e68-bf86-4f0990c394af\" (UID: \"35451e26-ec80-4e68-bf86-4f0990c394af\") " Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.212936 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35451e26-ec80-4e68-bf86-4f0990c394af-utilities\") pod \"35451e26-ec80-4e68-bf86-4f0990c394af\" (UID: \"35451e26-ec80-4e68-bf86-4f0990c394af\") " Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.212992 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7ts8\" (UniqueName: \"kubernetes.io/projected/eccbf47b-47fe-4980-b09b-cde621bb188a-kube-api-access-s7ts8\") pod \"eccbf47b-47fe-4980-b09b-cde621bb188a\" (UID: \"eccbf47b-47fe-4980-b09b-cde621bb188a\") " Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.213087 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccbf47b-47fe-4980-b09b-cde621bb188a-catalog-content\") pod \"eccbf47b-47fe-4980-b09b-cde621bb188a\" (UID: \"eccbf47b-47fe-4980-b09b-cde621bb188a\") " Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.213783 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eccbf47b-47fe-4980-b09b-cde621bb188a-utilities" (OuterVolumeSpecName: "utilities") pod "eccbf47b-47fe-4980-b09b-cde621bb188a" (UID: "eccbf47b-47fe-4980-b09b-cde621bb188a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.214664 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35451e26-ec80-4e68-bf86-4f0990c394af-utilities" (OuterVolumeSpecName: "utilities") pod "35451e26-ec80-4e68-bf86-4f0990c394af" (UID: "35451e26-ec80-4e68-bf86-4f0990c394af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.222440 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eccbf47b-47fe-4980-b09b-cde621bb188a-kube-api-access-s7ts8" (OuterVolumeSpecName: "kube-api-access-s7ts8") pod "eccbf47b-47fe-4980-b09b-cde621bb188a" (UID: "eccbf47b-47fe-4980-b09b-cde621bb188a"). InnerVolumeSpecName "kube-api-access-s7ts8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.225577 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35451e26-ec80-4e68-bf86-4f0990c394af-kube-api-access-q2vnz" (OuterVolumeSpecName: "kube-api-access-q2vnz") pod "35451e26-ec80-4e68-bf86-4f0990c394af" (UID: "35451e26-ec80-4e68-bf86-4f0990c394af"). InnerVolumeSpecName "kube-api-access-q2vnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.252862 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-db4cv" event={"ID":"0a44e830-89c8-428e-ab90-d8936c069de4","Type":"ContainerDied","Data":"03a4d399bd5339c4e06a1ccb3da366be0ee7cfa0375c5a0e63dfce6593dde172"} Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.253164 4828 scope.go:117] "RemoveContainer" containerID="7f71177be44eb6dafe0c04817046796d8c8193abac5819ba60da7ed4991c7f8a" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.253345 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-db4cv" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.261199 4828 generic.go:334] "Generic (PLEG): container finished" podID="edc8363b-0cee-48b5-b568-8a694fdc91eb" containerID="9d599274cbaabb060d99ed6f234a6ec172d63155cb7be254326a1451f86df015" exitCode=0 Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.261285 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twkcr" event={"ID":"edc8363b-0cee-48b5-b568-8a694fdc91eb","Type":"ContainerDied","Data":"9d599274cbaabb060d99ed6f234a6ec172d63155cb7be254326a1451f86df015"} Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.261315 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twkcr" event={"ID":"edc8363b-0cee-48b5-b568-8a694fdc91eb","Type":"ContainerDied","Data":"0a5161f37193fe65fbbf6419e25819e5daad30b533db25d67a67af189a166d7c"} Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.261390 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twkcr" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.265874 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35451e26-ec80-4e68-bf86-4f0990c394af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35451e26-ec80-4e68-bf86-4f0990c394af" (UID: "35451e26-ec80-4e68-bf86-4f0990c394af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.268731 4828 generic.go:334] "Generic (PLEG): container finished" podID="81124877-aea7-4853-b4da-978dcf29d980" containerID="cea5d2f28a988dca7a0d0bc233355f32300f9a7d117a5afc5fa685fe2950fabd" exitCode=0 Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.268812 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vktx7" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.268832 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vktx7" event={"ID":"81124877-aea7-4853-b4da-978dcf29d980","Type":"ContainerDied","Data":"cea5d2f28a988dca7a0d0bc233355f32300f9a7d117a5afc5fa685fe2950fabd"} Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.268872 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vktx7" event={"ID":"81124877-aea7-4853-b4da-978dcf29d980","Type":"ContainerDied","Data":"ef9786a1014fac680ff907ff0dcbd1b8ac431418553f01aaad3fa08277523548"} Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.272579 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpwkr" event={"ID":"eccbf47b-47fe-4980-b09b-cde621bb188a","Type":"ContainerDied","Data":"b783a3108dfb3cab40e52d83436f6c901942945371bb78d610eea0e31826f1a0"} Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.273551 
4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpwkr" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.287569 4828 scope.go:117] "RemoveContainer" containerID="f201724c66c0747f2ceee1084b859440246c6f425417ffea800c5811eeb82568" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.291662 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eccbf47b-47fe-4980-b09b-cde621bb188a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eccbf47b-47fe-4980-b09b-cde621bb188a" (UID: "eccbf47b-47fe-4980-b09b-cde621bb188a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.293848 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2qvr" event={"ID":"1c5bb383-f3ed-43cd-b62c-38d3e2922f11","Type":"ContainerDied","Data":"578b5ccc91e7a8325c100fc75b1ae7a84f48368ac9472de97261c8ad64124d68"} Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.294196 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b2qvr" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.295444 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqxp4"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.306565 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hmxx8_5ba8ca1a-d67d-4042-bebb-94891b81644f/marketplace-operator/1.log" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.309496 4828 generic.go:334] "Generic (PLEG): container finished" podID="5ba8ca1a-d67d-4042-bebb-94891b81644f" containerID="1000791b8cb87dd1bb011800fe78c6f35113543fc3516b4a239380340c263265" exitCode=2 Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.309626 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" event={"ID":"5ba8ca1a-d67d-4042-bebb-94891b81644f","Type":"ContainerDied","Data":"1000791b8cb87dd1bb011800fe78c6f35113543fc3516b4a239380340c263265"} Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.309691 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" event={"ID":"5ba8ca1a-d67d-4042-bebb-94891b81644f","Type":"ContainerDied","Data":"3566f9402e04f0cb9f1b44366f98ccb1ba1accdfc7b46073ef6fad8191b41271"} Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.309665 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hmxx8" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.315529 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxws9" event={"ID":"9a9da14c-b652-4eca-bf03-8eedf90d40fe","Type":"ContainerDied","Data":"e3a991bcd28ae647611cb7e04760352853d7c4d777abb2312867645bd31949a9"} Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.315752 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jxws9" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.315790 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35451e26-ec80-4e68-bf86-4f0990c394af-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.315821 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7ts8\" (UniqueName: \"kubernetes.io/projected/eccbf47b-47fe-4980-b09b-cde621bb188a-kube-api-access-s7ts8\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.315837 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccbf47b-47fe-4980-b09b-cde621bb188a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.315850 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccbf47b-47fe-4980-b09b-cde621bb188a-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.315861 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35451e26-ec80-4e68-bf86-4f0990c394af-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 
07:07:19.315874 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2vnz\" (UniqueName: \"kubernetes.io/projected/35451e26-ec80-4e68-bf86-4f0990c394af-kube-api-access-q2vnz\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.318745 4828 generic.go:334] "Generic (PLEG): container finished" podID="097b513c-f25d-4a6d-9c88-90ac8f322a19" containerID="9cc78342c8838f578ae52889a768947d187b351b8a3d2057f86364af88b8a293" exitCode=0 Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.318783 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r5hqw" event={"ID":"097b513c-f25d-4a6d-9c88-90ac8f322a19","Type":"ContainerDied","Data":"9cc78342c8838f578ae52889a768947d187b351b8a3d2057f86364af88b8a293"} Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.319029 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r5hqw" event={"ID":"097b513c-f25d-4a6d-9c88-90ac8f322a19","Type":"ContainerDied","Data":"93a95d1ef35062b9a906135b8a205bf415137620181a57b590396c25467b2124"} Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.319057 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r5hqw" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.329820 4828 generic.go:334] "Generic (PLEG): container finished" podID="35451e26-ec80-4e68-bf86-4f0990c394af" containerID="71913356c8df2b2facbff98e5b645e5300ab20f6423476b7663b8b339b21b543" exitCode=0 Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.329910 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh2x8" event={"ID":"35451e26-ec80-4e68-bf86-4f0990c394af","Type":"ContainerDied","Data":"71913356c8df2b2facbff98e5b645e5300ab20f6423476b7663b8b339b21b543"} Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.329946 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh2x8" event={"ID":"35451e26-ec80-4e68-bf86-4f0990c394af","Type":"ContainerDied","Data":"18eec56362e747ca7afd0e8b91b82239e0083e8e32cd71e706178fee193bf888"} Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.330061 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gh2x8" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.332306 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-db4cv"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.333065 4828 scope.go:117] "RemoveContainer" containerID="107985ab855786e5d558ca78e90711c98985c57920b2194ca91a3846905a4771" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.336113 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-db4cv"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.348497 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vktx7"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.354152 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vktx7"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.359317 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-twkcr"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.363853 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-twkcr"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.371773 4828 scope.go:117] "RemoveContainer" containerID="9d599274cbaabb060d99ed6f234a6ec172d63155cb7be254326a1451f86df015" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.388891 4828 scope.go:117] "RemoveContainer" containerID="835093c6ef72bb8e075a68161cb769640f38d159b9a0d963ca28edb4fe073e2e" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.421496 4828 scope.go:117] "RemoveContainer" containerID="fcdae28d388ec61fcd67268fc1f84e4d2278c4e8e083211f54298d849bd5dee2" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.424365 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" 
path="/var/lib/kubelet/pods/0a44e830-89c8-428e-ab90-d8936c069de4/volumes" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.425163 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81124877-aea7-4853-b4da-978dcf29d980" path="/var/lib/kubelet/pods/81124877-aea7-4853-b4da-978dcf29d980/volumes" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.425973 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" path="/var/lib/kubelet/pods/edc8363b-0cee-48b5-b568-8a694fdc91eb/volumes" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.427689 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jxws9"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.428186 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jxws9"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.435078 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh2x8"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.440234 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh2x8"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.447057 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hmxx8"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.447606 4828 scope.go:117] "RemoveContainer" containerID="9d599274cbaabb060d99ed6f234a6ec172d63155cb7be254326a1451f86df015" Nov 29 07:07:19 crc kubenswrapper[4828]: E1129 07:07:19.448186 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d599274cbaabb060d99ed6f234a6ec172d63155cb7be254326a1451f86df015\": container with ID starting with 9d599274cbaabb060d99ed6f234a6ec172d63155cb7be254326a1451f86df015 not found: ID 
does not exist" containerID="9d599274cbaabb060d99ed6f234a6ec172d63155cb7be254326a1451f86df015" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.448313 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d599274cbaabb060d99ed6f234a6ec172d63155cb7be254326a1451f86df015"} err="failed to get container status \"9d599274cbaabb060d99ed6f234a6ec172d63155cb7be254326a1451f86df015\": rpc error: code = NotFound desc = could not find container \"9d599274cbaabb060d99ed6f234a6ec172d63155cb7be254326a1451f86df015\": container with ID starting with 9d599274cbaabb060d99ed6f234a6ec172d63155cb7be254326a1451f86df015 not found: ID does not exist" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.448416 4828 scope.go:117] "RemoveContainer" containerID="835093c6ef72bb8e075a68161cb769640f38d159b9a0d963ca28edb4fe073e2e" Nov 29 07:07:19 crc kubenswrapper[4828]: E1129 07:07:19.448879 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"835093c6ef72bb8e075a68161cb769640f38d159b9a0d963ca28edb4fe073e2e\": container with ID starting with 835093c6ef72bb8e075a68161cb769640f38d159b9a0d963ca28edb4fe073e2e not found: ID does not exist" containerID="835093c6ef72bb8e075a68161cb769640f38d159b9a0d963ca28edb4fe073e2e" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.449591 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"835093c6ef72bb8e075a68161cb769640f38d159b9a0d963ca28edb4fe073e2e"} err="failed to get container status \"835093c6ef72bb8e075a68161cb769640f38d159b9a0d963ca28edb4fe073e2e\": rpc error: code = NotFound desc = could not find container \"835093c6ef72bb8e075a68161cb769640f38d159b9a0d963ca28edb4fe073e2e\": container with ID starting with 835093c6ef72bb8e075a68161cb769640f38d159b9a0d963ca28edb4fe073e2e not found: ID does not exist" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.449675 4828 
scope.go:117] "RemoveContainer" containerID="fcdae28d388ec61fcd67268fc1f84e4d2278c4e8e083211f54298d849bd5dee2" Nov 29 07:07:19 crc kubenswrapper[4828]: E1129 07:07:19.450059 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcdae28d388ec61fcd67268fc1f84e4d2278c4e8e083211f54298d849bd5dee2\": container with ID starting with fcdae28d388ec61fcd67268fc1f84e4d2278c4e8e083211f54298d849bd5dee2 not found: ID does not exist" containerID="fcdae28d388ec61fcd67268fc1f84e4d2278c4e8e083211f54298d849bd5dee2" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.450160 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcdae28d388ec61fcd67268fc1f84e4d2278c4e8e083211f54298d849bd5dee2"} err="failed to get container status \"fcdae28d388ec61fcd67268fc1f84e4d2278c4e8e083211f54298d849bd5dee2\": rpc error: code = NotFound desc = could not find container \"fcdae28d388ec61fcd67268fc1f84e4d2278c4e8e083211f54298d849bd5dee2\": container with ID starting with fcdae28d388ec61fcd67268fc1f84e4d2278c4e8e083211f54298d849bd5dee2 not found: ID does not exist" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.450232 4828 scope.go:117] "RemoveContainer" containerID="cea5d2f28a988dca7a0d0bc233355f32300f9a7d117a5afc5fa685fe2950fabd" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.451455 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hmxx8"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.451954 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.462796 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b2qvr"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.471415 4828 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b2qvr"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.472851 4828 scope.go:117] "RemoveContainer" containerID="0c2947e13356c80bc3455687505ebc6039a3948c7ddb765b72568ea1e77faa28" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.481808 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r5hqw"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.487665 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r5hqw"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.491840 4828 scope.go:117] "RemoveContainer" containerID="96cc998b65e711362fd60fc875a93efde52cfaf91a04a1ea4bfa1ee7667b79b4" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.504127 4828 scope.go:117] "RemoveContainer" containerID="cea5d2f28a988dca7a0d0bc233355f32300f9a7d117a5afc5fa685fe2950fabd" Nov 29 07:07:19 crc kubenswrapper[4828]: E1129 07:07:19.504641 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea5d2f28a988dca7a0d0bc233355f32300f9a7d117a5afc5fa685fe2950fabd\": container with ID starting with cea5d2f28a988dca7a0d0bc233355f32300f9a7d117a5afc5fa685fe2950fabd not found: ID does not exist" containerID="cea5d2f28a988dca7a0d0bc233355f32300f9a7d117a5afc5fa685fe2950fabd" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.504701 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea5d2f28a988dca7a0d0bc233355f32300f9a7d117a5afc5fa685fe2950fabd"} err="failed to get container status \"cea5d2f28a988dca7a0d0bc233355f32300f9a7d117a5afc5fa685fe2950fabd\": rpc error: code = NotFound desc = could not find container \"cea5d2f28a988dca7a0d0bc233355f32300f9a7d117a5afc5fa685fe2950fabd\": container with ID starting with cea5d2f28a988dca7a0d0bc233355f32300f9a7d117a5afc5fa685fe2950fabd not found: ID 
does not exist" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.505137 4828 scope.go:117] "RemoveContainer" containerID="0c2947e13356c80bc3455687505ebc6039a3948c7ddb765b72568ea1e77faa28" Nov 29 07:07:19 crc kubenswrapper[4828]: E1129 07:07:19.505502 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c2947e13356c80bc3455687505ebc6039a3948c7ddb765b72568ea1e77faa28\": container with ID starting with 0c2947e13356c80bc3455687505ebc6039a3948c7ddb765b72568ea1e77faa28 not found: ID does not exist" containerID="0c2947e13356c80bc3455687505ebc6039a3948c7ddb765b72568ea1e77faa28" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.505528 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2947e13356c80bc3455687505ebc6039a3948c7ddb765b72568ea1e77faa28"} err="failed to get container status \"0c2947e13356c80bc3455687505ebc6039a3948c7ddb765b72568ea1e77faa28\": rpc error: code = NotFound desc = could not find container \"0c2947e13356c80bc3455687505ebc6039a3948c7ddb765b72568ea1e77faa28\": container with ID starting with 0c2947e13356c80bc3455687505ebc6039a3948c7ddb765b72568ea1e77faa28 not found: ID does not exist" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.505545 4828 scope.go:117] "RemoveContainer" containerID="96cc998b65e711362fd60fc875a93efde52cfaf91a04a1ea4bfa1ee7667b79b4" Nov 29 07:07:19 crc kubenswrapper[4828]: E1129 07:07:19.505984 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96cc998b65e711362fd60fc875a93efde52cfaf91a04a1ea4bfa1ee7667b79b4\": container with ID starting with 96cc998b65e711362fd60fc875a93efde52cfaf91a04a1ea4bfa1ee7667b79b4 not found: ID does not exist" containerID="96cc998b65e711362fd60fc875a93efde52cfaf91a04a1ea4bfa1ee7667b79b4" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.506011 4828 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96cc998b65e711362fd60fc875a93efde52cfaf91a04a1ea4bfa1ee7667b79b4"} err="failed to get container status \"96cc998b65e711362fd60fc875a93efde52cfaf91a04a1ea4bfa1ee7667b79b4\": rpc error: code = NotFound desc = could not find container \"96cc998b65e711362fd60fc875a93efde52cfaf91a04a1ea4bfa1ee7667b79b4\": container with ID starting with 96cc998b65e711362fd60fc875a93efde52cfaf91a04a1ea4bfa1ee7667b79b4 not found: ID does not exist" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.506027 4828 scope.go:117] "RemoveContainer" containerID="c07b99bff05677b5b93955bed8db2dc66bb41624e9a2b9117367a8456f26b09a" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.519947 4828 scope.go:117] "RemoveContainer" containerID="6addb6af3d835ecfe8ed2494dd9630c3605cb8108304a598a92d14a9d440e42c" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.536185 4828 scope.go:117] "RemoveContainer" containerID="ffff0fbcb978a51f0a4740c11383b0ca85ba6ec5be605de812a0b9403e6dfa4d" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.551957 4828 scope.go:117] "RemoveContainer" containerID="4a5f781db81fdabc05e7b02dbead37353877ef046a552098f8548afb684ad85b" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.568812 4828 scope.go:117] "RemoveContainer" containerID="ebeb6f36810ea7dae384688486c7918158e6621dcc59a346f47e4bb202e665ba" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.595570 4828 scope.go:117] "RemoveContainer" containerID="abea0050fe7ba1da805e8d49f283380724ded4b9a8d3ec1bf595ce67bd2313c8" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.596247 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpwkr"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.599422 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vpwkr"] Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.614699 4828 scope.go:117] 
"RemoveContainer" containerID="1000791b8cb87dd1bb011800fe78c6f35113543fc3516b4a239380340c263265" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.628106 4828 scope.go:117] "RemoveContainer" containerID="3172a42d5f8110f44e34db1dfec5519db7aa33bcb60a58de6dc264065bb01a77" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.645211 4828 scope.go:117] "RemoveContainer" containerID="1000791b8cb87dd1bb011800fe78c6f35113543fc3516b4a239380340c263265" Nov 29 07:07:19 crc kubenswrapper[4828]: E1129 07:07:19.645648 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1000791b8cb87dd1bb011800fe78c6f35113543fc3516b4a239380340c263265\": container with ID starting with 1000791b8cb87dd1bb011800fe78c6f35113543fc3516b4a239380340c263265 not found: ID does not exist" containerID="1000791b8cb87dd1bb011800fe78c6f35113543fc3516b4a239380340c263265" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.645678 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1000791b8cb87dd1bb011800fe78c6f35113543fc3516b4a239380340c263265"} err="failed to get container status \"1000791b8cb87dd1bb011800fe78c6f35113543fc3516b4a239380340c263265\": rpc error: code = NotFound desc = could not find container \"1000791b8cb87dd1bb011800fe78c6f35113543fc3516b4a239380340c263265\": container with ID starting with 1000791b8cb87dd1bb011800fe78c6f35113543fc3516b4a239380340c263265 not found: ID does not exist" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.645703 4828 scope.go:117] "RemoveContainer" containerID="3172a42d5f8110f44e34db1dfec5519db7aa33bcb60a58de6dc264065bb01a77" Nov 29 07:07:19 crc kubenswrapper[4828]: E1129 07:07:19.646163 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3172a42d5f8110f44e34db1dfec5519db7aa33bcb60a58de6dc264065bb01a77\": container with ID starting with 
3172a42d5f8110f44e34db1dfec5519db7aa33bcb60a58de6dc264065bb01a77 not found: ID does not exist" containerID="3172a42d5f8110f44e34db1dfec5519db7aa33bcb60a58de6dc264065bb01a77" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.646204 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3172a42d5f8110f44e34db1dfec5519db7aa33bcb60a58de6dc264065bb01a77"} err="failed to get container status \"3172a42d5f8110f44e34db1dfec5519db7aa33bcb60a58de6dc264065bb01a77\": rpc error: code = NotFound desc = could not find container \"3172a42d5f8110f44e34db1dfec5519db7aa33bcb60a58de6dc264065bb01a77\": container with ID starting with 3172a42d5f8110f44e34db1dfec5519db7aa33bcb60a58de6dc264065bb01a77 not found: ID does not exist" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.646240 4828 scope.go:117] "RemoveContainer" containerID="fd4789150dfa94299901fcf6cb0d91e7485402c5760cd86a7674971fbf200b37" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.661179 4828 scope.go:117] "RemoveContainer" containerID="85883e2b2db1183483f9441f959fa5535d40dcd157df6fc4cade711a6480a875" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.678227 4828 scope.go:117] "RemoveContainer" containerID="e760616a0e4c4285d330aaad58e30718487092dbc67c9f02c413f490e0373c65" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.678905 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.693597 4828 scope.go:117] "RemoveContainer" containerID="9cc78342c8838f578ae52889a768947d187b351b8a3d2057f86364af88b8a293" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.706500 4828 scope.go:117] "RemoveContainer" containerID="72eb8aa9f0a28e649a917b02ef9fe63bb194175b445b9c4108dacd163c7387c2" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.717857 4828 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-metrics-certs-default" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.720231 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.726391 4828 scope.go:117] "RemoveContainer" containerID="be2fd19307c108a4245c4ca7c90a735785cde2914f30d63e3d909cfeab232a98" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.740418 4828 scope.go:117] "RemoveContainer" containerID="9cc78342c8838f578ae52889a768947d187b351b8a3d2057f86364af88b8a293" Nov 29 07:07:19 crc kubenswrapper[4828]: E1129 07:07:19.741150 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cc78342c8838f578ae52889a768947d187b351b8a3d2057f86364af88b8a293\": container with ID starting with 9cc78342c8838f578ae52889a768947d187b351b8a3d2057f86364af88b8a293 not found: ID does not exist" containerID="9cc78342c8838f578ae52889a768947d187b351b8a3d2057f86364af88b8a293" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.741212 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cc78342c8838f578ae52889a768947d187b351b8a3d2057f86364af88b8a293"} err="failed to get container status \"9cc78342c8838f578ae52889a768947d187b351b8a3d2057f86364af88b8a293\": rpc error: code = NotFound desc = could not find container \"9cc78342c8838f578ae52889a768947d187b351b8a3d2057f86364af88b8a293\": container with ID starting with 9cc78342c8838f578ae52889a768947d187b351b8a3d2057f86364af88b8a293 not found: ID does not exist" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.741254 4828 scope.go:117] "RemoveContainer" containerID="72eb8aa9f0a28e649a917b02ef9fe63bb194175b445b9c4108dacd163c7387c2" Nov 29 07:07:19 crc kubenswrapper[4828]: E1129 07:07:19.741840 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"72eb8aa9f0a28e649a917b02ef9fe63bb194175b445b9c4108dacd163c7387c2\": container with ID starting with 72eb8aa9f0a28e649a917b02ef9fe63bb194175b445b9c4108dacd163c7387c2 not found: ID does not exist" containerID="72eb8aa9f0a28e649a917b02ef9fe63bb194175b445b9c4108dacd163c7387c2" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.741898 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72eb8aa9f0a28e649a917b02ef9fe63bb194175b445b9c4108dacd163c7387c2"} err="failed to get container status \"72eb8aa9f0a28e649a917b02ef9fe63bb194175b445b9c4108dacd163c7387c2\": rpc error: code = NotFound desc = could not find container \"72eb8aa9f0a28e649a917b02ef9fe63bb194175b445b9c4108dacd163c7387c2\": container with ID starting with 72eb8aa9f0a28e649a917b02ef9fe63bb194175b445b9c4108dacd163c7387c2 not found: ID does not exist" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.741935 4828 scope.go:117] "RemoveContainer" containerID="be2fd19307c108a4245c4ca7c90a735785cde2914f30d63e3d909cfeab232a98" Nov 29 07:07:19 crc kubenswrapper[4828]: E1129 07:07:19.742416 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be2fd19307c108a4245c4ca7c90a735785cde2914f30d63e3d909cfeab232a98\": container with ID starting with be2fd19307c108a4245c4ca7c90a735785cde2914f30d63e3d909cfeab232a98 not found: ID does not exist" containerID="be2fd19307c108a4245c4ca7c90a735785cde2914f30d63e3d909cfeab232a98" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.742478 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be2fd19307c108a4245c4ca7c90a735785cde2914f30d63e3d909cfeab232a98"} err="failed to get container status \"be2fd19307c108a4245c4ca7c90a735785cde2914f30d63e3d909cfeab232a98\": rpc error: code = NotFound desc = could not find container \"be2fd19307c108a4245c4ca7c90a735785cde2914f30d63e3d909cfeab232a98\": container 
with ID starting with be2fd19307c108a4245c4ca7c90a735785cde2914f30d63e3d909cfeab232a98 not found: ID does not exist" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.742509 4828 scope.go:117] "RemoveContainer" containerID="71913356c8df2b2facbff98e5b645e5300ab20f6423476b7663b8b339b21b543" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.757137 4828 scope.go:117] "RemoveContainer" containerID="6d33a8d8d489027e413b9a5dda78346ad029644bf2b332bc2a005d4430c79520" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.771348 4828 scope.go:117] "RemoveContainer" containerID="06c333e635aee94c76045ed9adc23620a47bae78f379096c688b4cad8ba53575" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.784350 4828 scope.go:117] "RemoveContainer" containerID="71913356c8df2b2facbff98e5b645e5300ab20f6423476b7663b8b339b21b543" Nov 29 07:07:19 crc kubenswrapper[4828]: E1129 07:07:19.784825 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71913356c8df2b2facbff98e5b645e5300ab20f6423476b7663b8b339b21b543\": container with ID starting with 71913356c8df2b2facbff98e5b645e5300ab20f6423476b7663b8b339b21b543 not found: ID does not exist" containerID="71913356c8df2b2facbff98e5b645e5300ab20f6423476b7663b8b339b21b543" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.784864 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71913356c8df2b2facbff98e5b645e5300ab20f6423476b7663b8b339b21b543"} err="failed to get container status \"71913356c8df2b2facbff98e5b645e5300ab20f6423476b7663b8b339b21b543\": rpc error: code = NotFound desc = could not find container \"71913356c8df2b2facbff98e5b645e5300ab20f6423476b7663b8b339b21b543\": container with ID starting with 71913356c8df2b2facbff98e5b645e5300ab20f6423476b7663b8b339b21b543 not found: ID does not exist" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.784895 4828 scope.go:117] "RemoveContainer" 
containerID="6d33a8d8d489027e413b9a5dda78346ad029644bf2b332bc2a005d4430c79520" Nov 29 07:07:19 crc kubenswrapper[4828]: E1129 07:07:19.785474 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d33a8d8d489027e413b9a5dda78346ad029644bf2b332bc2a005d4430c79520\": container with ID starting with 6d33a8d8d489027e413b9a5dda78346ad029644bf2b332bc2a005d4430c79520 not found: ID does not exist" containerID="6d33a8d8d489027e413b9a5dda78346ad029644bf2b332bc2a005d4430c79520" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.785507 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d33a8d8d489027e413b9a5dda78346ad029644bf2b332bc2a005d4430c79520"} err="failed to get container status \"6d33a8d8d489027e413b9a5dda78346ad029644bf2b332bc2a005d4430c79520\": rpc error: code = NotFound desc = could not find container \"6d33a8d8d489027e413b9a5dda78346ad029644bf2b332bc2a005d4430c79520\": container with ID starting with 6d33a8d8d489027e413b9a5dda78346ad029644bf2b332bc2a005d4430c79520 not found: ID does not exist" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.785550 4828 scope.go:117] "RemoveContainer" containerID="06c333e635aee94c76045ed9adc23620a47bae78f379096c688b4cad8ba53575" Nov 29 07:07:19 crc kubenswrapper[4828]: E1129 07:07:19.785930 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06c333e635aee94c76045ed9adc23620a47bae78f379096c688b4cad8ba53575\": container with ID starting with 06c333e635aee94c76045ed9adc23620a47bae78f379096c688b4cad8ba53575 not found: ID does not exist" containerID="06c333e635aee94c76045ed9adc23620a47bae78f379096c688b4cad8ba53575" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.785963 4828 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"06c333e635aee94c76045ed9adc23620a47bae78f379096c688b4cad8ba53575"} err="failed to get container status \"06c333e635aee94c76045ed9adc23620a47bae78f379096c688b4cad8ba53575\": rpc error: code = NotFound desc = could not find container \"06c333e635aee94c76045ed9adc23620a47bae78f379096c688b4cad8ba53575\": container with ID starting with 06c333e635aee94c76045ed9adc23620a47bae78f379096c688b4cad8ba53575 not found: ID does not exist" Nov 29 07:07:19 crc kubenswrapper[4828]: I1129 07:07:19.819364 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 29 07:07:20 crc kubenswrapper[4828]: I1129 07:07:20.351576 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" event={"ID":"8d6f6ac7-9c5b-4828-98e7-d047f395ff83","Type":"ContainerStarted","Data":"d855ad51ca81e9a23a5ab48cbd4568ed2989c01e20c92bb97f4e322063afba62"} Nov 29 07:07:20 crc kubenswrapper[4828]: I1129 07:07:20.351920 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" event={"ID":"8d6f6ac7-9c5b-4828-98e7-d047f395ff83","Type":"ContainerStarted","Data":"6d736f706c73e068508943c965f6f5fd1d15eb0a49c3d73925e7477a87f15074"} Nov 29 07:07:20 crc kubenswrapper[4828]: I1129 07:07:20.665825 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 29 07:07:20 crc kubenswrapper[4828]: I1129 07:07:20.671995 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 29 07:07:20 crc kubenswrapper[4828]: I1129 07:07:20.705060 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 29 07:07:20 crc kubenswrapper[4828]: I1129 07:07:20.855810 4828 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 29 07:07:21 crc kubenswrapper[4828]: I1129 07:07:21.358111 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" Nov 29 07:07:21 crc kubenswrapper[4828]: I1129 07:07:21.361459 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" Nov 29 07:07:21 crc kubenswrapper[4828]: I1129 07:07:21.380909 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zqxp4" podStartSLOduration=3.380870693 podStartE2EDuration="3.380870693s" podCreationTimestamp="2025-11-29 07:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:07:21.378130765 +0000 UTC m=+381.000206833" watchObservedRunningTime="2025-11-29 07:07:21.380870693 +0000 UTC m=+381.002946751" Nov 29 07:07:21 crc kubenswrapper[4828]: I1129 07:07:21.426998 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" path="/var/lib/kubelet/pods/097b513c-f25d-4a6d-9c88-90ac8f322a19/volumes" Nov 29 07:07:21 crc kubenswrapper[4828]: I1129 07:07:21.427699 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" path="/var/lib/kubelet/pods/1c5bb383-f3ed-43cd-b62c-38d3e2922f11/volumes" Nov 29 07:07:21 crc kubenswrapper[4828]: I1129 07:07:21.428256 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" path="/var/lib/kubelet/pods/35451e26-ec80-4e68-bf86-4f0990c394af/volumes" Nov 29 07:07:21 crc kubenswrapper[4828]: I1129 07:07:21.429500 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ba8ca1a-d67d-4042-bebb-94891b81644f" 
path="/var/lib/kubelet/pods/5ba8ca1a-d67d-4042-bebb-94891b81644f/volumes" Nov 29 07:07:21 crc kubenswrapper[4828]: I1129 07:07:21.430023 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" path="/var/lib/kubelet/pods/9a9da14c-b652-4eca-bf03-8eedf90d40fe/volumes" Nov 29 07:07:21 crc kubenswrapper[4828]: I1129 07:07:21.431182 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" path="/var/lib/kubelet/pods/eccbf47b-47fe-4980-b09b-cde621bb188a/volumes" Nov 29 07:07:21 crc kubenswrapper[4828]: I1129 07:07:21.948629 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 29 07:07:22 crc kubenswrapper[4828]: I1129 07:07:22.237938 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 29 07:07:22 crc kubenswrapper[4828]: I1129 07:07:22.405093 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 29 07:07:22 crc kubenswrapper[4828]: I1129 07:07:22.599230 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 29 07:07:22 crc kubenswrapper[4828]: I1129 07:07:22.660389 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 29 07:07:22 crc kubenswrapper[4828]: I1129 07:07:22.695733 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 29 07:07:22 crc kubenswrapper[4828]: I1129 07:07:22.733093 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 29 07:07:22 crc kubenswrapper[4828]: I1129 07:07:22.869250 4828 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 29 07:07:23 crc kubenswrapper[4828]: I1129 07:07:23.359716 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 29 07:07:24 crc kubenswrapper[4828]: I1129 07:07:24.689012 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 29 07:07:24 crc kubenswrapper[4828]: I1129 07:07:24.892734 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 29 07:07:25 crc kubenswrapper[4828]: I1129 07:07:25.197896 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 29 07:07:25 crc kubenswrapper[4828]: I1129 07:07:25.386218 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 29 07:07:25 crc kubenswrapper[4828]: I1129 07:07:25.583826 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 29 07:07:41 crc kubenswrapper[4828]: I1129 07:07:41.487235 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:07:41 crc kubenswrapper[4828]: I1129 07:07:41.487891 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 
07:07:54.069861 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jr7qs"] Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.070865 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.070901 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.070931 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.070939 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.070951 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81124877-aea7-4853-b4da-978dcf29d980" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.070958 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="81124877-aea7-4853-b4da-978dcf29d980" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.070969 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.070976 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.070988 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.070995 4828 
state_mem.go:107] "Deleted CPUSet assignment" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071003 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071011 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071020 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81124877-aea7-4853-b4da-978dcf29d980" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071027 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="81124877-aea7-4853-b4da-978dcf29d980" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071034 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071039 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071047 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81124877-aea7-4853-b4da-978dcf29d980" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071053 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="81124877-aea7-4853-b4da-978dcf29d980" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071060 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071066 4828 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071076 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071082 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071092 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071098 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071105 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071110 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071118 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071125 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071136 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071143 4828 
state_mem.go:107] "Deleted CPUSet assignment" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071152 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071167 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071177 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071184 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071193 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071199 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071209 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071215 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071223 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071229 4828 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071237 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071242 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" containerName="extract-content" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071248 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071254 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071262 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071283 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071296 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba8ca1a-d67d-4042-bebb-94891b81644f" containerName="marketplace-operator" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071307 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba8ca1a-d67d-4042-bebb-94891b81644f" containerName="marketplace-operator" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071318 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071326 4828 
state_mem.go:107] "Deleted CPUSet assignment" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" containerName="extract-utilities" Nov 29 07:07:54 crc kubenswrapper[4828]: E1129 07:07:54.071337 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba8ca1a-d67d-4042-bebb-94891b81644f" containerName="marketplace-operator" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071346 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba8ca1a-d67d-4042-bebb-94891b81644f" containerName="marketplace-operator" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071501 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ba8ca1a-d67d-4042-bebb-94891b81644f" containerName="marketplace-operator" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071517 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc8363b-0cee-48b5-b568-8a694fdc91eb" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071525 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9da14c-b652-4eca-bf03-8eedf90d40fe" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071532 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="097b513c-f25d-4a6d-9c88-90ac8f322a19" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071540 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="35451e26-ec80-4e68-bf86-4f0990c394af" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071550 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5bb383-f3ed-43cd-b62c-38d3e2922f11" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071557 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="eccbf47b-47fe-4980-b09b-cde621bb188a" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: 
I1129 07:07:54.071564 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="81124877-aea7-4853-b4da-978dcf29d980" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071574 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a44e830-89c8-428e-ab90-d8936c069de4" containerName="registry-server" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.071733 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ba8ca1a-d67d-4042-bebb-94891b81644f" containerName="marketplace-operator" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.072449 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.077749 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.083551 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jr7qs"] Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.145686 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8cfc2c-2879-4633-95e5-8ea070145a47-catalog-content\") pod \"community-operators-jr7qs\" (UID: \"5d8cfc2c-2879-4633-95e5-8ea070145a47\") " pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.145758 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8cfc2c-2879-4633-95e5-8ea070145a47-utilities\") pod \"community-operators-jr7qs\" (UID: \"5d8cfc2c-2879-4633-95e5-8ea070145a47\") " pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 
07:07:54.145905 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs5gg\" (UniqueName: \"kubernetes.io/projected/5d8cfc2c-2879-4633-95e5-8ea070145a47-kube-api-access-hs5gg\") pod \"community-operators-jr7qs\" (UID: \"5d8cfc2c-2879-4633-95e5-8ea070145a47\") " pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.247252 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8cfc2c-2879-4633-95e5-8ea070145a47-catalog-content\") pod \"community-operators-jr7qs\" (UID: \"5d8cfc2c-2879-4633-95e5-8ea070145a47\") " pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.247328 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8cfc2c-2879-4633-95e5-8ea070145a47-utilities\") pod \"community-operators-jr7qs\" (UID: \"5d8cfc2c-2879-4633-95e5-8ea070145a47\") " pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.247380 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs5gg\" (UniqueName: \"kubernetes.io/projected/5d8cfc2c-2879-4633-95e5-8ea070145a47-kube-api-access-hs5gg\") pod \"community-operators-jr7qs\" (UID: \"5d8cfc2c-2879-4633-95e5-8ea070145a47\") " pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.248212 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8cfc2c-2879-4633-95e5-8ea070145a47-catalog-content\") pod \"community-operators-jr7qs\" (UID: \"5d8cfc2c-2879-4633-95e5-8ea070145a47\") " pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:07:54 crc 
kubenswrapper[4828]: I1129 07:07:54.248223 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8cfc2c-2879-4633-95e5-8ea070145a47-utilities\") pod \"community-operators-jr7qs\" (UID: \"5d8cfc2c-2879-4633-95e5-8ea070145a47\") " pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.260626 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-84trl"] Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.261612 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.263864 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.273964 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84trl"] Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.274865 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs5gg\" (UniqueName: \"kubernetes.io/projected/5d8cfc2c-2879-4633-95e5-8ea070145a47-kube-api-access-hs5gg\") pod \"community-operators-jr7qs\" (UID: \"5d8cfc2c-2879-4633-95e5-8ea070145a47\") " pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.349192 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc085063-478e-40a4-8810-f62d1d6bfa64-catalog-content\") pod \"certified-operators-84trl\" (UID: \"fc085063-478e-40a4-8810-f62d1d6bfa64\") " pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.349319 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk4dt\" (UniqueName: \"kubernetes.io/projected/fc085063-478e-40a4-8810-f62d1d6bfa64-kube-api-access-lk4dt\") pod \"certified-operators-84trl\" (UID: \"fc085063-478e-40a4-8810-f62d1d6bfa64\") " pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.349381 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc085063-478e-40a4-8810-f62d1d6bfa64-utilities\") pod \"certified-operators-84trl\" (UID: \"fc085063-478e-40a4-8810-f62d1d6bfa64\") " pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.392970 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.451103 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc085063-478e-40a4-8810-f62d1d6bfa64-catalog-content\") pod \"certified-operators-84trl\" (UID: \"fc085063-478e-40a4-8810-f62d1d6bfa64\") " pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.451167 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk4dt\" (UniqueName: \"kubernetes.io/projected/fc085063-478e-40a4-8810-f62d1d6bfa64-kube-api-access-lk4dt\") pod \"certified-operators-84trl\" (UID: \"fc085063-478e-40a4-8810-f62d1d6bfa64\") " pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.451219 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc085063-478e-40a4-8810-f62d1d6bfa64-utilities\") pod 
\"certified-operators-84trl\" (UID: \"fc085063-478e-40a4-8810-f62d1d6bfa64\") " pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.451970 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc085063-478e-40a4-8810-f62d1d6bfa64-catalog-content\") pod \"certified-operators-84trl\" (UID: \"fc085063-478e-40a4-8810-f62d1d6bfa64\") " pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.452017 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc085063-478e-40a4-8810-f62d1d6bfa64-utilities\") pod \"certified-operators-84trl\" (UID: \"fc085063-478e-40a4-8810-f62d1d6bfa64\") " pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.479549 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk4dt\" (UniqueName: \"kubernetes.io/projected/fc085063-478e-40a4-8810-f62d1d6bfa64-kube-api-access-lk4dt\") pod \"certified-operators-84trl\" (UID: \"fc085063-478e-40a4-8810-f62d1d6bfa64\") " pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.578013 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.642780 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jr7qs"] Nov 29 07:07:54 crc kubenswrapper[4828]: W1129 07:07:54.652817 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d8cfc2c_2879_4633_95e5_8ea070145a47.slice/crio-92c9edf1a88dde6e0587c604f4af074ad93050cf46ea58c4f23320ce579f5ba9 WatchSource:0}: Error finding container 92c9edf1a88dde6e0587c604f4af074ad93050cf46ea58c4f23320ce579f5ba9: Status 404 returned error can't find the container with id 92c9edf1a88dde6e0587c604f4af074ad93050cf46ea58c4f23320ce579f5ba9 Nov 29 07:07:54 crc kubenswrapper[4828]: I1129 07:07:54.775429 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84trl"] Nov 29 07:07:54 crc kubenswrapper[4828]: W1129 07:07:54.784677 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc085063_478e_40a4_8810_f62d1d6bfa64.slice/crio-f501b11c923e61132f4c7ed43215344f44559155e9b53a623169ed8522f15f7d WatchSource:0}: Error finding container f501b11c923e61132f4c7ed43215344f44559155e9b53a623169ed8522f15f7d: Status 404 returned error can't find the container with id f501b11c923e61132f4c7ed43215344f44559155e9b53a623169ed8522f15f7d Nov 29 07:07:55 crc kubenswrapper[4828]: I1129 07:07:55.654967 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d8cfc2c-2879-4633-95e5-8ea070145a47" containerID="880ed5fac18c378e12cfc2789d564af6140afa92b6583d2a3e610cc045f8f331" exitCode=0 Nov 29 07:07:55 crc kubenswrapper[4828]: I1129 07:07:55.655028 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jr7qs" 
event={"ID":"5d8cfc2c-2879-4633-95e5-8ea070145a47","Type":"ContainerDied","Data":"880ed5fac18c378e12cfc2789d564af6140afa92b6583d2a3e610cc045f8f331"} Nov 29 07:07:55 crc kubenswrapper[4828]: I1129 07:07:55.655388 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jr7qs" event={"ID":"5d8cfc2c-2879-4633-95e5-8ea070145a47","Type":"ContainerStarted","Data":"92c9edf1a88dde6e0587c604f4af074ad93050cf46ea58c4f23320ce579f5ba9"} Nov 29 07:07:55 crc kubenswrapper[4828]: I1129 07:07:55.659920 4828 generic.go:334] "Generic (PLEG): container finished" podID="fc085063-478e-40a4-8810-f62d1d6bfa64" containerID="9e1be553450b56aee0ef0d354fc61413b72f28d85b05dc08f870dd26d7f83624" exitCode=0 Nov 29 07:07:55 crc kubenswrapper[4828]: I1129 07:07:55.659963 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84trl" event={"ID":"fc085063-478e-40a4-8810-f62d1d6bfa64","Type":"ContainerDied","Data":"9e1be553450b56aee0ef0d354fc61413b72f28d85b05dc08f870dd26d7f83624"} Nov 29 07:07:55 crc kubenswrapper[4828]: I1129 07:07:55.659992 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84trl" event={"ID":"fc085063-478e-40a4-8810-f62d1d6bfa64","Type":"ContainerStarted","Data":"f501b11c923e61132f4c7ed43215344f44559155e9b53a623169ed8522f15f7d"} Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.457751 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f9grx"] Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.460634 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.464144 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.465611 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9grx"] Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.593556 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a880c4-a44a-4fba-9f14-845905e54799-catalog-content\") pod \"redhat-marketplace-f9grx\" (UID: \"65a880c4-a44a-4fba-9f14-845905e54799\") " pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.593666 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a880c4-a44a-4fba-9f14-845905e54799-utilities\") pod \"redhat-marketplace-f9grx\" (UID: \"65a880c4-a44a-4fba-9f14-845905e54799\") " pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.593887 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2w72\" (UniqueName: \"kubernetes.io/projected/65a880c4-a44a-4fba-9f14-845905e54799-kube-api-access-x2w72\") pod \"redhat-marketplace-f9grx\" (UID: \"65a880c4-a44a-4fba-9f14-845905e54799\") " pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.657817 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7vccf"] Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.659831 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.662376 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.671061 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7vccf"] Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.694965 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2w72\" (UniqueName: \"kubernetes.io/projected/65a880c4-a44a-4fba-9f14-845905e54799-kube-api-access-x2w72\") pod \"redhat-marketplace-f9grx\" (UID: \"65a880c4-a44a-4fba-9f14-845905e54799\") " pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.695197 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a880c4-a44a-4fba-9f14-845905e54799-catalog-content\") pod \"redhat-marketplace-f9grx\" (UID: \"65a880c4-a44a-4fba-9f14-845905e54799\") " pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.695411 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a880c4-a44a-4fba-9f14-845905e54799-utilities\") pod \"redhat-marketplace-f9grx\" (UID: \"65a880c4-a44a-4fba-9f14-845905e54799\") " pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.697975 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a880c4-a44a-4fba-9f14-845905e54799-catalog-content\") pod \"redhat-marketplace-f9grx\" (UID: \"65a880c4-a44a-4fba-9f14-845905e54799\") " 
pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.698636 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a880c4-a44a-4fba-9f14-845905e54799-utilities\") pod \"redhat-marketplace-f9grx\" (UID: \"65a880c4-a44a-4fba-9f14-845905e54799\") " pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.715566 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2w72\" (UniqueName: \"kubernetes.io/projected/65a880c4-a44a-4fba-9f14-845905e54799-kube-api-access-x2w72\") pod \"redhat-marketplace-f9grx\" (UID: \"65a880c4-a44a-4fba-9f14-845905e54799\") " pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.780465 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.796579 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4766d758-11c2-400b-89fd-4b1de688f74d-utilities\") pod \"redhat-operators-7vccf\" (UID: \"4766d758-11c2-400b-89fd-4b1de688f74d\") " pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.796701 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6z6r\" (UniqueName: \"kubernetes.io/projected/4766d758-11c2-400b-89fd-4b1de688f74d-kube-api-access-q6z6r\") pod \"redhat-operators-7vccf\" (UID: \"4766d758-11c2-400b-89fd-4b1de688f74d\") " pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.796726 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4766d758-11c2-400b-89fd-4b1de688f74d-catalog-content\") pod \"redhat-operators-7vccf\" (UID: \"4766d758-11c2-400b-89fd-4b1de688f74d\") " pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.898293 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4766d758-11c2-400b-89fd-4b1de688f74d-utilities\") pod \"redhat-operators-7vccf\" (UID: \"4766d758-11c2-400b-89fd-4b1de688f74d\") " pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.898715 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6z6r\" (UniqueName: \"kubernetes.io/projected/4766d758-11c2-400b-89fd-4b1de688f74d-kube-api-access-q6z6r\") pod \"redhat-operators-7vccf\" (UID: \"4766d758-11c2-400b-89fd-4b1de688f74d\") " pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.898901 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4766d758-11c2-400b-89fd-4b1de688f74d-catalog-content\") pod \"redhat-operators-7vccf\" (UID: \"4766d758-11c2-400b-89fd-4b1de688f74d\") " pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.899049 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4766d758-11c2-400b-89fd-4b1de688f74d-utilities\") pod \"redhat-operators-7vccf\" (UID: \"4766d758-11c2-400b-89fd-4b1de688f74d\") " pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.900755 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4766d758-11c2-400b-89fd-4b1de688f74d-catalog-content\") pod \"redhat-operators-7vccf\" (UID: \"4766d758-11c2-400b-89fd-4b1de688f74d\") " pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.915752 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6z6r\" (UniqueName: \"kubernetes.io/projected/4766d758-11c2-400b-89fd-4b1de688f74d-kube-api-access-q6z6r\") pod \"redhat-operators-7vccf\" (UID: \"4766d758-11c2-400b-89fd-4b1de688f74d\") " pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:07:56 crc kubenswrapper[4828]: I1129 07:07:56.981626 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:07:57 crc kubenswrapper[4828]: I1129 07:07:57.344671 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9grx"] Nov 29 07:07:57 crc kubenswrapper[4828]: W1129 07:07:57.353693 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65a880c4_a44a_4fba_9f14_845905e54799.slice/crio-3e60e455939228c6462320c43cf5ad2bb508583984a049e0b243a0bb5bf11782 WatchSource:0}: Error finding container 3e60e455939228c6462320c43cf5ad2bb508583984a049e0b243a0bb5bf11782: Status 404 returned error can't find the container with id 3e60e455939228c6462320c43cf5ad2bb508583984a049e0b243a0bb5bf11782 Nov 29 07:07:57 crc kubenswrapper[4828]: I1129 07:07:57.376296 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7vccf"] Nov 29 07:07:57 crc kubenswrapper[4828]: W1129 07:07:57.383395 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4766d758_11c2_400b_89fd_4b1de688f74d.slice/crio-9fc8e74c7b199e9d4e3bfa34b6ccfc44a1ed845c064cc755bad5ce54e345d0b6 
WatchSource:0}: Error finding container 9fc8e74c7b199e9d4e3bfa34b6ccfc44a1ed845c064cc755bad5ce54e345d0b6: Status 404 returned error can't find the container with id 9fc8e74c7b199e9d4e3bfa34b6ccfc44a1ed845c064cc755bad5ce54e345d0b6 Nov 29 07:07:57 crc kubenswrapper[4828]: I1129 07:07:57.671710 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9grx" event={"ID":"65a880c4-a44a-4fba-9f14-845905e54799","Type":"ContainerStarted","Data":"3e60e455939228c6462320c43cf5ad2bb508583984a049e0b243a0bb5bf11782"} Nov 29 07:07:57 crc kubenswrapper[4828]: I1129 07:07:57.674093 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vccf" event={"ID":"4766d758-11c2-400b-89fd-4b1de688f74d","Type":"ContainerStarted","Data":"9fc8e74c7b199e9d4e3bfa34b6ccfc44a1ed845c064cc755bad5ce54e345d0b6"} Nov 29 07:07:58 crc kubenswrapper[4828]: I1129 07:07:58.683834 4828 generic.go:334] "Generic (PLEG): container finished" podID="fc085063-478e-40a4-8810-f62d1d6bfa64" containerID="df755878298cb3b121c60b21d2449f1286b3b6ed5492e58ec3defda748ef40fd" exitCode=0 Nov 29 07:07:58 crc kubenswrapper[4828]: I1129 07:07:58.684891 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84trl" event={"ID":"fc085063-478e-40a4-8810-f62d1d6bfa64","Type":"ContainerDied","Data":"df755878298cb3b121c60b21d2449f1286b3b6ed5492e58ec3defda748ef40fd"} Nov 29 07:07:58 crc kubenswrapper[4828]: I1129 07:07:58.690071 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jr7qs" event={"ID":"5d8cfc2c-2879-4633-95e5-8ea070145a47","Type":"ContainerStarted","Data":"8fef37a1037ba43c080c66220abf4fbaffb24fbea4ea732ab3e2c5adea64b4c6"} Nov 29 07:07:58 crc kubenswrapper[4828]: I1129 07:07:58.692435 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9grx" 
event={"ID":"65a880c4-a44a-4fba-9f14-845905e54799","Type":"ContainerStarted","Data":"a15421fc53fac66faa48ac08cba4281c7ed1d4b71ad36fd4b27f4ed270d993d2"} Nov 29 07:07:59 crc kubenswrapper[4828]: I1129 07:07:59.705603 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84trl" event={"ID":"fc085063-478e-40a4-8810-f62d1d6bfa64","Type":"ContainerStarted","Data":"0c658dd0dcaa73544fb8b6f8834ec8d2b2df74d43026fc07d81b47ff17954de9"} Nov 29 07:07:59 crc kubenswrapper[4828]: I1129 07:07:59.709830 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d8cfc2c-2879-4633-95e5-8ea070145a47" containerID="8fef37a1037ba43c080c66220abf4fbaffb24fbea4ea732ab3e2c5adea64b4c6" exitCode=0 Nov 29 07:07:59 crc kubenswrapper[4828]: I1129 07:07:59.709937 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jr7qs" event={"ID":"5d8cfc2c-2879-4633-95e5-8ea070145a47","Type":"ContainerDied","Data":"8fef37a1037ba43c080c66220abf4fbaffb24fbea4ea732ab3e2c5adea64b4c6"} Nov 29 07:07:59 crc kubenswrapper[4828]: I1129 07:07:59.712575 4828 generic.go:334] "Generic (PLEG): container finished" podID="65a880c4-a44a-4fba-9f14-845905e54799" containerID="a15421fc53fac66faa48ac08cba4281c7ed1d4b71ad36fd4b27f4ed270d993d2" exitCode=0 Nov 29 07:07:59 crc kubenswrapper[4828]: I1129 07:07:59.712645 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9grx" event={"ID":"65a880c4-a44a-4fba-9f14-845905e54799","Type":"ContainerDied","Data":"a15421fc53fac66faa48ac08cba4281c7ed1d4b71ad36fd4b27f4ed270d993d2"} Nov 29 07:07:59 crc kubenswrapper[4828]: I1129 07:07:59.716718 4828 generic.go:334] "Generic (PLEG): container finished" podID="4766d758-11c2-400b-89fd-4b1de688f74d" containerID="ee0ce034ae0a104e91c4553735fd791ddb1b9150445620cb67edcc4ed2f35395" exitCode=0 Nov 29 07:07:59 crc kubenswrapper[4828]: I1129 07:07:59.716774 4828 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-7vccf" event={"ID":"4766d758-11c2-400b-89fd-4b1de688f74d","Type":"ContainerDied","Data":"ee0ce034ae0a104e91c4553735fd791ddb1b9150445620cb67edcc4ed2f35395"} Nov 29 07:07:59 crc kubenswrapper[4828]: I1129 07:07:59.728225 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-84trl" podStartSLOduration=1.913298258 podStartE2EDuration="5.72818396s" podCreationTimestamp="2025-11-29 07:07:54 +0000 UTC" firstStartedPulling="2025-11-29 07:07:55.662101201 +0000 UTC m=+415.284177259" lastFinishedPulling="2025-11-29 07:07:59.476986903 +0000 UTC m=+419.099062961" observedRunningTime="2025-11-29 07:07:59.727512302 +0000 UTC m=+419.349588370" watchObservedRunningTime="2025-11-29 07:07:59.72818396 +0000 UTC m=+419.350260018" Nov 29 07:08:00 crc kubenswrapper[4828]: I1129 07:08:00.804914 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nz25w"] Nov 29 07:08:00 crc kubenswrapper[4828]: I1129 07:08:00.805280 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" podUID="13bf3905-e3c4-4b60-a233-d459262f9b98" containerName="controller-manager" containerID="cri-o://b5c8f0a6bfaa5824410552672887091a5a3f8d59cfd550b5683eb4a54d2175cc" gracePeriod=30 Nov 29 07:08:00 crc kubenswrapper[4828]: I1129 07:08:00.909642 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d"] Nov 29 07:08:00 crc kubenswrapper[4828]: I1129 07:08:00.909938 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" podUID="681c42c0-27a5-4f76-a992-1855f9fa4be1" containerName="route-controller-manager" 
containerID="cri-o://b0f2fb7d5f1398054de0ae73259346d8c45ae8e2d4d6a9f487666f73e4f40354" gracePeriod=30 Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.792382 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jr7qs" event={"ID":"5d8cfc2c-2879-4633-95e5-8ea070145a47","Type":"ContainerStarted","Data":"38c0562c50858a8ed751e33673c0d88dac47f4c463b2a8934984585f0b143ccc"} Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.828880 4828 generic.go:334] "Generic (PLEG): container finished" podID="681c42c0-27a5-4f76-a992-1855f9fa4be1" containerID="b0f2fb7d5f1398054de0ae73259346d8c45ae8e2d4d6a9f487666f73e4f40354" exitCode=0 Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.829026 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" event={"ID":"681c42c0-27a5-4f76-a992-1855f9fa4be1","Type":"ContainerDied","Data":"b0f2fb7d5f1398054de0ae73259346d8c45ae8e2d4d6a9f487666f73e4f40354"} Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.861835 4828 generic.go:334] "Generic (PLEG): container finished" podID="13bf3905-e3c4-4b60-a233-d459262f9b98" containerID="b5c8f0a6bfaa5824410552672887091a5a3f8d59cfd550b5683eb4a54d2175cc" exitCode=0 Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.861906 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" event={"ID":"13bf3905-e3c4-4b60-a233-d459262f9b98","Type":"ContainerDied","Data":"b5c8f0a6bfaa5824410552672887091a5a3f8d59cfd550b5683eb4a54d2175cc"} Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.879426 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jr7qs" podStartSLOduration=2.602688073 podStartE2EDuration="7.879385352s" podCreationTimestamp="2025-11-29 07:07:54 +0000 UTC" firstStartedPulling="2025-11-29 07:07:55.656708405 +0000 UTC 
m=+415.278784473" lastFinishedPulling="2025-11-29 07:08:00.933405684 +0000 UTC m=+420.555481752" observedRunningTime="2025-11-29 07:08:01.857400393 +0000 UTC m=+421.479476481" watchObservedRunningTime="2025-11-29 07:08:01.879385352 +0000 UTC m=+421.501461410" Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.896416 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.980922 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13bf3905-e3c4-4b60-a233-d459262f9b98-serving-cert\") pod \"13bf3905-e3c4-4b60-a233-d459262f9b98\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.980999 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-config\") pod \"13bf3905-e3c4-4b60-a233-d459262f9b98\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.981047 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2969\" (UniqueName: \"kubernetes.io/projected/13bf3905-e3c4-4b60-a233-d459262f9b98-kube-api-access-s2969\") pod \"13bf3905-e3c4-4b60-a233-d459262f9b98\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.981135 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-client-ca\") pod \"13bf3905-e3c4-4b60-a233-d459262f9b98\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.981194 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-proxy-ca-bundles\") pod \"13bf3905-e3c4-4b60-a233-d459262f9b98\" (UID: \"13bf3905-e3c4-4b60-a233-d459262f9b98\") " Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.982442 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "13bf3905-e3c4-4b60-a233-d459262f9b98" (UID: "13bf3905-e3c4-4b60-a233-d459262f9b98"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.982517 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-config" (OuterVolumeSpecName: "config") pod "13bf3905-e3c4-4b60-a233-d459262f9b98" (UID: "13bf3905-e3c4-4b60-a233-d459262f9b98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:08:01 crc kubenswrapper[4828]: I1129 07:08:01.982989 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-client-ca" (OuterVolumeSpecName: "client-ca") pod "13bf3905-e3c4-4b60-a233-d459262f9b98" (UID: "13bf3905-e3c4-4b60-a233-d459262f9b98"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:01.990375 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13bf3905-e3c4-4b60-a233-d459262f9b98-kube-api-access-s2969" (OuterVolumeSpecName: "kube-api-access-s2969") pod "13bf3905-e3c4-4b60-a233-d459262f9b98" (UID: "13bf3905-e3c4-4b60-a233-d459262f9b98"). InnerVolumeSpecName "kube-api-access-s2969". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:01.990976 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bf3905-e3c4-4b60-a233-d459262f9b98-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "13bf3905-e3c4-4b60-a233-d459262f9b98" (UID: "13bf3905-e3c4-4b60-a233-d459262f9b98"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.084157 4828 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.084206 4828 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.084217 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13bf3905-e3c4-4b60-a233-d459262f9b98-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.084228 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13bf3905-e3c4-4b60-a233-d459262f9b98-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.084242 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2969\" (UniqueName: \"kubernetes.io/projected/13bf3905-e3c4-4b60-a233-d459262f9b98-kube-api-access-s2969\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.124081 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.185736 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkjsf\" (UniqueName: \"kubernetes.io/projected/681c42c0-27a5-4f76-a992-1855f9fa4be1-kube-api-access-fkjsf\") pod \"681c42c0-27a5-4f76-a992-1855f9fa4be1\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.185884 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/681c42c0-27a5-4f76-a992-1855f9fa4be1-serving-cert\") pod \"681c42c0-27a5-4f76-a992-1855f9fa4be1\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.185972 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/681c42c0-27a5-4f76-a992-1855f9fa4be1-client-ca\") pod \"681c42c0-27a5-4f76-a992-1855f9fa4be1\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.186003 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681c42c0-27a5-4f76-a992-1855f9fa4be1-config\") pod \"681c42c0-27a5-4f76-a992-1855f9fa4be1\" (UID: \"681c42c0-27a5-4f76-a992-1855f9fa4be1\") " Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.187002 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/681c42c0-27a5-4f76-a992-1855f9fa4be1-client-ca" (OuterVolumeSpecName: "client-ca") pod "681c42c0-27a5-4f76-a992-1855f9fa4be1" (UID: "681c42c0-27a5-4f76-a992-1855f9fa4be1"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.187038 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/681c42c0-27a5-4f76-a992-1855f9fa4be1-config" (OuterVolumeSpecName: "config") pod "681c42c0-27a5-4f76-a992-1855f9fa4be1" (UID: "681c42c0-27a5-4f76-a992-1855f9fa4be1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.190287 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/681c42c0-27a5-4f76-a992-1855f9fa4be1-kube-api-access-fkjsf" (OuterVolumeSpecName: "kube-api-access-fkjsf") pod "681c42c0-27a5-4f76-a992-1855f9fa4be1" (UID: "681c42c0-27a5-4f76-a992-1855f9fa4be1"). InnerVolumeSpecName "kube-api-access-fkjsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.192406 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/681c42c0-27a5-4f76-a992-1855f9fa4be1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "681c42c0-27a5-4f76-a992-1855f9fa4be1" (UID: "681c42c0-27a5-4f76-a992-1855f9fa4be1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.288317 4828 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/681c42c0-27a5-4f76-a992-1855f9fa4be1-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.288641 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681c42c0-27a5-4f76-a992-1855f9fa4be1-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.288778 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkjsf\" (UniqueName: \"kubernetes.io/projected/681c42c0-27a5-4f76-a992-1855f9fa4be1-kube-api-access-fkjsf\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.288871 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/681c42c0-27a5-4f76-a992-1855f9fa4be1-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.392218 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc"] Nov 29 07:08:02 crc kubenswrapper[4828]: E1129 07:08:02.392947 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13bf3905-e3c4-4b60-a233-d459262f9b98" containerName="controller-manager" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.393071 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="13bf3905-e3c4-4b60-a233-d459262f9b98" containerName="controller-manager" Nov 29 07:08:02 crc kubenswrapper[4828]: E1129 07:08:02.393369 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="681c42c0-27a5-4f76-a992-1855f9fa4be1" containerName="route-controller-manager" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.393445 4828 
state_mem.go:107] "Deleted CPUSet assignment" podUID="681c42c0-27a5-4f76-a992-1855f9fa4be1" containerName="route-controller-manager" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.393689 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="681c42c0-27a5-4f76-a992-1855f9fa4be1" containerName="route-controller-manager" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.393806 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="13bf3905-e3c4-4b60-a233-d459262f9b98" containerName="controller-manager" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.394468 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.396991 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59b44b9f79-p7pmt"] Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.397875 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.455203 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc"] Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.461354 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59b44b9f79-p7pmt"] Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.495740 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e29d671a-585b-4186-ad17-3353c6afcffc-serving-cert\") pod \"route-controller-manager-5b68d84b74-lxckc\" (UID: \"e29d671a-585b-4186-ad17-3353c6afcffc\") " pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.495933 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e29d671a-585b-4186-ad17-3353c6afcffc-config\") pod \"route-controller-manager-5b68d84b74-lxckc\" (UID: \"e29d671a-585b-4186-ad17-3353c6afcffc\") " pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.495996 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-config\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.496335 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e29d671a-585b-4186-ad17-3353c6afcffc-client-ca\") pod \"route-controller-manager-5b68d84b74-lxckc\" (UID: \"e29d671a-585b-4186-ad17-3353c6afcffc\") " pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.496410 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-serving-cert\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.496441 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-proxy-ca-bundles\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.496472 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-client-ca\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.496517 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm5mc\" (UniqueName: \"kubernetes.io/projected/e29d671a-585b-4186-ad17-3353c6afcffc-kube-api-access-vm5mc\") pod \"route-controller-manager-5b68d84b74-lxckc\" (UID: \"e29d671a-585b-4186-ad17-3353c6afcffc\") " 
pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.496562 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngl9c\" (UniqueName: \"kubernetes.io/projected/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-kube-api-access-ngl9c\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.597472 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e29d671a-585b-4186-ad17-3353c6afcffc-client-ca\") pod \"route-controller-manager-5b68d84b74-lxckc\" (UID: \"e29d671a-585b-4186-ad17-3353c6afcffc\") " pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.597546 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-serving-cert\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.597575 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-proxy-ca-bundles\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.597596 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-client-ca\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.597627 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm5mc\" (UniqueName: \"kubernetes.io/projected/e29d671a-585b-4186-ad17-3353c6afcffc-kube-api-access-vm5mc\") pod \"route-controller-manager-5b68d84b74-lxckc\" (UID: \"e29d671a-585b-4186-ad17-3353c6afcffc\") " pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.597664 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngl9c\" (UniqueName: \"kubernetes.io/projected/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-kube-api-access-ngl9c\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.597714 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e29d671a-585b-4186-ad17-3353c6afcffc-serving-cert\") pod \"route-controller-manager-5b68d84b74-lxckc\" (UID: \"e29d671a-585b-4186-ad17-3353c6afcffc\") " pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.597760 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e29d671a-585b-4186-ad17-3353c6afcffc-config\") pod \"route-controller-manager-5b68d84b74-lxckc\" (UID: \"e29d671a-585b-4186-ad17-3353c6afcffc\") " pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 
07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.597798 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-config\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.599488 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-client-ca\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.599657 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-config\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.600086 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e29d671a-585b-4186-ad17-3353c6afcffc-client-ca\") pod \"route-controller-manager-5b68d84b74-lxckc\" (UID: \"e29d671a-585b-4186-ad17-3353c6afcffc\") " pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.600324 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-proxy-ca-bundles\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " 
pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.600417 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e29d671a-585b-4186-ad17-3353c6afcffc-config\") pod \"route-controller-manager-5b68d84b74-lxckc\" (UID: \"e29d671a-585b-4186-ad17-3353c6afcffc\") " pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.602795 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e29d671a-585b-4186-ad17-3353c6afcffc-serving-cert\") pod \"route-controller-manager-5b68d84b74-lxckc\" (UID: \"e29d671a-585b-4186-ad17-3353c6afcffc\") " pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.603289 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-serving-cert\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.618560 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm5mc\" (UniqueName: \"kubernetes.io/projected/e29d671a-585b-4186-ad17-3353c6afcffc-kube-api-access-vm5mc\") pod \"route-controller-manager-5b68d84b74-lxckc\" (UID: \"e29d671a-585b-4186-ad17-3353c6afcffc\") " pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.618822 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngl9c\" (UniqueName: 
\"kubernetes.io/projected/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-kube-api-access-ngl9c\") pod \"controller-manager-59b44b9f79-p7pmt\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.771218 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.784168 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.873369 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vccf" event={"ID":"4766d758-11c2-400b-89fd-4b1de688f74d","Type":"ContainerStarted","Data":"8c11bf1ba331bb5bb7a155b807c9031a09a03ce9260303bfd0616d85861f6efd"} Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.876658 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" event={"ID":"13bf3905-e3c4-4b60-a233-d459262f9b98","Type":"ContainerDied","Data":"5f833d3e6d4a3928e127a65f7c2eebd685097b1d18fb5f489b487e6b9eb40e5a"} Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.876744 4828 scope.go:117] "RemoveContainer" containerID="b5c8f0a6bfaa5824410552672887091a5a3f8d59cfd550b5683eb4a54d2175cc" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.876913 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nz25w" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.888502 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.889225 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d" event={"ID":"681c42c0-27a5-4f76-a992-1855f9fa4be1","Type":"ContainerDied","Data":"c84a4333c923b60ee9127c6e08d4a3b410252a3721dd3046d4a033603bad7e26"} Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.903768 4828 generic.go:334] "Generic (PLEG): container finished" podID="65a880c4-a44a-4fba-9f14-845905e54799" containerID="d9b09417e2cf92babd0d672239f86a3a19cdd8d91849f8123522df7b30905660" exitCode=0 Nov 29 07:08:02 crc kubenswrapper[4828]: I1129 07:08:02.905795 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9grx" event={"ID":"65a880c4-a44a-4fba-9f14-845905e54799","Type":"ContainerDied","Data":"d9b09417e2cf92babd0d672239f86a3a19cdd8d91849f8123522df7b30905660"} Nov 29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.023829 4828 scope.go:117] "RemoveContainer" containerID="b0f2fb7d5f1398054de0ae73259346d8c45ae8e2d4d6a9f487666f73e4f40354" Nov 29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.036832 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d"] Nov 29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.043402 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkf8d"] Nov 29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.052162 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nz25w"] Nov 29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.060482 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nz25w"] Nov 
29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.141667 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc"] Nov 29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.295973 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59b44b9f79-p7pmt"] Nov 29 07:08:03 crc kubenswrapper[4828]: W1129 07:08:03.302624 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef43c8c6_f342_4966_9ad2_a8fa451e0b02.slice/crio-d2ba122614384eb0c14c940fcaef7cf7c614c0fad9ace948734630eafd82cbcf WatchSource:0}: Error finding container d2ba122614384eb0c14c940fcaef7cf7c614c0fad9ace948734630eafd82cbcf: Status 404 returned error can't find the container with id d2ba122614384eb0c14c940fcaef7cf7c614c0fad9ace948734630eafd82cbcf Nov 29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.419548 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13bf3905-e3c4-4b60-a233-d459262f9b98" path="/var/lib/kubelet/pods/13bf3905-e3c4-4b60-a233-d459262f9b98/volumes" Nov 29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.420292 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="681c42c0-27a5-4f76-a992-1855f9fa4be1" path="/var/lib/kubelet/pods/681c42c0-27a5-4f76-a992-1855f9fa4be1/volumes" Nov 29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.910763 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" event={"ID":"ef43c8c6-f342-4966-9ad2-a8fa451e0b02","Type":"ContainerStarted","Data":"1d171a843db48239af381d50edbf30416b61f0ae778c1a7ca60384bea8f16e2e"} Nov 29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.910816 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" 
event={"ID":"ef43c8c6-f342-4966-9ad2-a8fa451e0b02","Type":"ContainerStarted","Data":"d2ba122614384eb0c14c940fcaef7cf7c614c0fad9ace948734630eafd82cbcf"} Nov 29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.914356 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" event={"ID":"e29d671a-585b-4186-ad17-3353c6afcffc","Type":"ContainerStarted","Data":"bbdf3783d4d41a5f1209d616af6ce184ea45b42967f4c7853e389b9a69262369"} Nov 29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.914388 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" event={"ID":"e29d671a-585b-4186-ad17-3353c6afcffc","Type":"ContainerStarted","Data":"fe5fbfc1eb92cc0c60b3e5cbaa9ed3d80a4b19130409e1a57276b5c14b5b4af8"} Nov 29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.917581 4828 generic.go:334] "Generic (PLEG): container finished" podID="4766d758-11c2-400b-89fd-4b1de688f74d" containerID="8c11bf1ba331bb5bb7a155b807c9031a09a03ce9260303bfd0616d85861f6efd" exitCode=0 Nov 29 07:08:03 crc kubenswrapper[4828]: I1129 07:08:03.917630 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vccf" event={"ID":"4766d758-11c2-400b-89fd-4b1de688f74d","Type":"ContainerDied","Data":"8c11bf1ba331bb5bb7a155b807c9031a09a03ce9260303bfd0616d85861f6efd"} Nov 29 07:08:04 crc kubenswrapper[4828]: I1129 07:08:04.394140 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:08:04 crc kubenswrapper[4828]: I1129 07:08:04.394205 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:08:04 crc kubenswrapper[4828]: I1129 07:08:04.439740 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:08:04 crc kubenswrapper[4828]: I1129 07:08:04.578666 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:08:04 crc kubenswrapper[4828]: I1129 07:08:04.578951 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:08:04 crc kubenswrapper[4828]: I1129 07:08:04.620759 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:08:04 crc kubenswrapper[4828]: I1129 07:08:04.942178 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" podStartSLOduration=3.942158713 podStartE2EDuration="3.942158713s" podCreationTimestamp="2025-11-29 07:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:04.941570287 +0000 UTC m=+424.563646355" watchObservedRunningTime="2025-11-29 07:08:04.942158713 +0000 UTC m=+424.564234791" Nov 29 07:08:04 crc kubenswrapper[4828]: I1129 07:08:04.972966 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-84trl" Nov 29 07:08:07 crc kubenswrapper[4828]: I1129 07:08:07.939298 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:07 crc kubenswrapper[4828]: I1129 07:08:07.945706 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:07 crc kubenswrapper[4828]: I1129 07:08:07.958438 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" podStartSLOduration=6.958418748 podStartE2EDuration="6.958418748s" podCreationTimestamp="2025-11-29 07:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:07.957574875 +0000 UTC m=+427.579650933" watchObservedRunningTime="2025-11-29 07:08:07.958418748 +0000 UTC m=+427.580494826" Nov 29 07:08:11 crc kubenswrapper[4828]: I1129 07:08:11.487083 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:08:11 crc kubenswrapper[4828]: I1129 07:08:11.487633 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:08:11 crc kubenswrapper[4828]: I1129 07:08:11.487713 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:08:11 crc kubenswrapper[4828]: I1129 07:08:11.488701 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de5680b47e332c14b381bb72b4ac2148493c666a12254a81b7fa5d8120a5bb93"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:08:11 crc kubenswrapper[4828]: I1129 07:08:11.488779 4828 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://de5680b47e332c14b381bb72b4ac2148493c666a12254a81b7fa5d8120a5bb93" gracePeriod=600 Nov 29 07:08:12 crc kubenswrapper[4828]: I1129 07:08:12.773346 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:12 crc kubenswrapper[4828]: I1129 07:08:12.779015 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b68d84b74-lxckc" Nov 29 07:08:14 crc kubenswrapper[4828]: I1129 07:08:14.436538 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:08:19 crc kubenswrapper[4828]: I1129 07:08:19.721064 4828 generic.go:334] "Generic (PLEG): container finished" podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="de5680b47e332c14b381bb72b4ac2148493c666a12254a81b7fa5d8120a5bb93" exitCode=0 Nov 29 07:08:19 crc kubenswrapper[4828]: I1129 07:08:19.721314 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"de5680b47e332c14b381bb72b4ac2148493c666a12254a81b7fa5d8120a5bb93"} Nov 29 07:08:19 crc kubenswrapper[4828]: I1129 07:08:19.721409 4828 scope.go:117] "RemoveContainer" containerID="bac6835165b9e52c5bb88f215aeb3d36e1327f2db7351e2479fffd8e471716be" Nov 29 07:08:20 crc kubenswrapper[4828]: I1129 07:08:20.730134 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vccf" event={"ID":"4766d758-11c2-400b-89fd-4b1de688f74d","Type":"ContainerStarted","Data":"974b0eb8a7b88caed511470c437bfff3c1fd31660f43be0f2c1f688f6b37bf74"} Nov 29 07:08:20 crc 
kubenswrapper[4828]: I1129 07:08:20.736440 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"81b06e8db4c29a460c072dc8a796a4c319640158b71110f5d37e4548c1dd9feb"} Nov 29 07:08:20 crc kubenswrapper[4828]: I1129 07:08:20.742518 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9grx" event={"ID":"65a880c4-a44a-4fba-9f14-845905e54799","Type":"ContainerStarted","Data":"ae62e5eed3ebdadd9b53ddc58aebde15870e6899c732be242be821e2b0bd50f9"} Nov 29 07:08:20 crc kubenswrapper[4828]: I1129 07:08:20.754570 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7vccf" podStartSLOduration=4.386258111 podStartE2EDuration="24.754551085s" podCreationTimestamp="2025-11-29 07:07:56 +0000 UTC" firstStartedPulling="2025-11-29 07:07:59.719596567 +0000 UTC m=+419.341672625" lastFinishedPulling="2025-11-29 07:08:20.087889541 +0000 UTC m=+439.709965599" observedRunningTime="2025-11-29 07:08:20.75325803 +0000 UTC m=+440.375334088" watchObservedRunningTime="2025-11-29 07:08:20.754551085 +0000 UTC m=+440.376627143" Nov 29 07:08:20 crc kubenswrapper[4828]: I1129 07:08:20.797763 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f9grx" podStartSLOduration=4.410535612 podStartE2EDuration="24.797733561s" podCreationTimestamp="2025-11-29 07:07:56 +0000 UTC" firstStartedPulling="2025-11-29 07:07:59.715174946 +0000 UTC m=+419.337251004" lastFinishedPulling="2025-11-29 07:08:20.102372895 +0000 UTC m=+439.724448953" observedRunningTime="2025-11-29 07:08:20.790508614 +0000 UTC m=+440.412584672" watchObservedRunningTime="2025-11-29 07:08:20.797733561 +0000 UTC m=+440.419809619" Nov 29 07:08:26 crc kubenswrapper[4828]: I1129 07:08:26.781352 4828 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:08:26 crc kubenswrapper[4828]: I1129 07:08:26.781789 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:08:26 crc kubenswrapper[4828]: I1129 07:08:26.819609 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:08:26 crc kubenswrapper[4828]: I1129 07:08:26.982034 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:08:26 crc kubenswrapper[4828]: I1129 07:08:26.982096 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:08:27 crc kubenswrapper[4828]: I1129 07:08:27.020039 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:08:27 crc kubenswrapper[4828]: I1129 07:08:27.825623 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f9grx" Nov 29 07:08:27 crc kubenswrapper[4828]: I1129 07:08:27.828837 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7vccf" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.092748 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-q5npl"] Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.094336 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.118820 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-q5npl"] Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.272190 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c631b02f-30ee-47b3-a602-e10f5f08c3fe-registry-certificates\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.272321 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c631b02f-30ee-47b3-a602-e10f5f08c3fe-trusted-ca\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.272347 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c631b02f-30ee-47b3-a602-e10f5f08c3fe-bound-sa-token\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.272381 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.272411 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c631b02f-30ee-47b3-a602-e10f5f08c3fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.272439 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c631b02f-30ee-47b3-a602-e10f5f08c3fe-registry-tls\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.272458 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c631b02f-30ee-47b3-a602-e10f5f08c3fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.272483 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zqwx\" (UniqueName: \"kubernetes.io/projected/c631b02f-30ee-47b3-a602-e10f5f08c3fe-kube-api-access-4zqwx\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.303159 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.373774 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c631b02f-30ee-47b3-a602-e10f5f08c3fe-registry-certificates\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.373880 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c631b02f-30ee-47b3-a602-e10f5f08c3fe-trusted-ca\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.373902 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c631b02f-30ee-47b3-a602-e10f5f08c3fe-bound-sa-token\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.373941 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c631b02f-30ee-47b3-a602-e10f5f08c3fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.373971 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c631b02f-30ee-47b3-a602-e10f5f08c3fe-registry-tls\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.373995 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c631b02f-30ee-47b3-a602-e10f5f08c3fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.374013 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zqwx\" (UniqueName: \"kubernetes.io/projected/c631b02f-30ee-47b3-a602-e10f5f08c3fe-kube-api-access-4zqwx\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.374727 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c631b02f-30ee-47b3-a602-e10f5f08c3fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.375370 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c631b02f-30ee-47b3-a602-e10f5f08c3fe-registry-certificates\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.376036 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c631b02f-30ee-47b3-a602-e10f5f08c3fe-trusted-ca\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.389581 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c631b02f-30ee-47b3-a602-e10f5f08c3fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.389632 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c631b02f-30ee-47b3-a602-e10f5f08c3fe-registry-tls\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.393913 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c631b02f-30ee-47b3-a602-e10f5f08c3fe-bound-sa-token\") pod \"image-registry-66df7c8f76-q5npl\" (UID: \"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.394019 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zqwx\" (UniqueName: \"kubernetes.io/projected/c631b02f-30ee-47b3-a602-e10f5f08c3fe-kube-api-access-4zqwx\") pod \"image-registry-66df7c8f76-q5npl\" (UID: 
\"c631b02f-30ee-47b3-a602-e10f5f08c3fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.411845 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:38 crc kubenswrapper[4828]: I1129 07:08:38.883565 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-q5npl"] Nov 29 07:08:38 crc kubenswrapper[4828]: W1129 07:08:38.894140 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc631b02f_30ee_47b3_a602_e10f5f08c3fe.slice/crio-e173c8b8b32e89b2f96fbbb6c05ba3010c826b39473fdd87aa42ac523417faf0 WatchSource:0}: Error finding container e173c8b8b32e89b2f96fbbb6c05ba3010c826b39473fdd87aa42ac523417faf0: Status 404 returned error can't find the container with id e173c8b8b32e89b2f96fbbb6c05ba3010c826b39473fdd87aa42ac523417faf0 Nov 29 07:08:39 crc kubenswrapper[4828]: I1129 07:08:39.886292 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" event={"ID":"c631b02f-30ee-47b3-a602-e10f5f08c3fe","Type":"ContainerStarted","Data":"7a6db65c22f7e96dd00387d35f18615528fe0ed3758abcdfbc6eee7a8ef31de4"} Nov 29 07:08:39 crc kubenswrapper[4828]: I1129 07:08:39.886712 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:39 crc kubenswrapper[4828]: I1129 07:08:39.886735 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" event={"ID":"c631b02f-30ee-47b3-a602-e10f5f08c3fe","Type":"ContainerStarted","Data":"e173c8b8b32e89b2f96fbbb6c05ba3010c826b39473fdd87aa42ac523417faf0"} Nov 29 07:08:39 crc kubenswrapper[4828]: I1129 07:08:39.906839 4828 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" podStartSLOduration=1.906807065 podStartE2EDuration="1.906807065s" podCreationTimestamp="2025-11-29 07:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:39.905122511 +0000 UTC m=+459.527198579" watchObservedRunningTime="2025-11-29 07:08:39.906807065 +0000 UTC m=+459.528883123" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.190117 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59b44b9f79-p7pmt"] Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.191070 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" podUID="ef43c8c6-f342-4966-9ad2-a8fa451e0b02" containerName="controller-manager" containerID="cri-o://1d171a843db48239af381d50edbf30416b61f0ae778c1a7ca60384bea8f16e2e" gracePeriod=30 Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.628514 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.727875 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-config\") pod \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.727958 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-proxy-ca-bundles\") pod \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.728068 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-serving-cert\") pod \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.728100 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngl9c\" (UniqueName: \"kubernetes.io/projected/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-kube-api-access-ngl9c\") pod \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.728171 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-client-ca\") pod \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\" (UID: \"ef43c8c6-f342-4966-9ad2-a8fa451e0b02\") " Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.728905 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ef43c8c6-f342-4966-9ad2-a8fa451e0b02" (UID: "ef43c8c6-f342-4966-9ad2-a8fa451e0b02"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.728950 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-client-ca" (OuterVolumeSpecName: "client-ca") pod "ef43c8c6-f342-4966-9ad2-a8fa451e0b02" (UID: "ef43c8c6-f342-4966-9ad2-a8fa451e0b02"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.729069 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-config" (OuterVolumeSpecName: "config") pod "ef43c8c6-f342-4966-9ad2-a8fa451e0b02" (UID: "ef43c8c6-f342-4966-9ad2-a8fa451e0b02"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.733822 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef43c8c6-f342-4966-9ad2-a8fa451e0b02" (UID: "ef43c8c6-f342-4966-9ad2-a8fa451e0b02"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.733976 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-kube-api-access-ngl9c" (OuterVolumeSpecName: "kube-api-access-ngl9c") pod "ef43c8c6-f342-4966-9ad2-a8fa451e0b02" (UID: "ef43c8c6-f342-4966-9ad2-a8fa451e0b02"). InnerVolumeSpecName "kube-api-access-ngl9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.829859 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.829896 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngl9c\" (UniqueName: \"kubernetes.io/projected/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-kube-api-access-ngl9c\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.829918 4828 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.829926 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.829934 4828 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef43c8c6-f342-4966-9ad2-a8fa451e0b02-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.899971 4828 generic.go:334] "Generic (PLEG): container finished" podID="ef43c8c6-f342-4966-9ad2-a8fa451e0b02" containerID="1d171a843db48239af381d50edbf30416b61f0ae778c1a7ca60384bea8f16e2e" exitCode=0 Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.900075 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.900096 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" event={"ID":"ef43c8c6-f342-4966-9ad2-a8fa451e0b02","Type":"ContainerDied","Data":"1d171a843db48239af381d50edbf30416b61f0ae778c1a7ca60384bea8f16e2e"} Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.900140 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59b44b9f79-p7pmt" event={"ID":"ef43c8c6-f342-4966-9ad2-a8fa451e0b02","Type":"ContainerDied","Data":"d2ba122614384eb0c14c940fcaef7cf7c614c0fad9ace948734630eafd82cbcf"} Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.900194 4828 scope.go:117] "RemoveContainer" containerID="1d171a843db48239af381d50edbf30416b61f0ae778c1a7ca60384bea8f16e2e" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.924036 4828 scope.go:117] "RemoveContainer" containerID="1d171a843db48239af381d50edbf30416b61f0ae778c1a7ca60384bea8f16e2e" Nov 29 07:08:41 crc kubenswrapper[4828]: E1129 07:08:41.924729 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d171a843db48239af381d50edbf30416b61f0ae778c1a7ca60384bea8f16e2e\": container with ID starting with 1d171a843db48239af381d50edbf30416b61f0ae778c1a7ca60384bea8f16e2e not found: ID does not exist" containerID="1d171a843db48239af381d50edbf30416b61f0ae778c1a7ca60384bea8f16e2e" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.924797 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d171a843db48239af381d50edbf30416b61f0ae778c1a7ca60384bea8f16e2e"} err="failed to get container status \"1d171a843db48239af381d50edbf30416b61f0ae778c1a7ca60384bea8f16e2e\": rpc error: code = NotFound desc = could not find container 
\"1d171a843db48239af381d50edbf30416b61f0ae778c1a7ca60384bea8f16e2e\": container with ID starting with 1d171a843db48239af381d50edbf30416b61f0ae778c1a7ca60384bea8f16e2e not found: ID does not exist" Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.932683 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59b44b9f79-p7pmt"] Nov 29 07:08:41 crc kubenswrapper[4828]: I1129 07:08:41.936840 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59b44b9f79-p7pmt"] Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.418796 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-558694589-g5jfs"] Nov 29 07:08:42 crc kubenswrapper[4828]: E1129 07:08:42.419055 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef43c8c6-f342-4966-9ad2-a8fa451e0b02" containerName="controller-manager" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.419079 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef43c8c6-f342-4966-9ad2-a8fa451e0b02" containerName="controller-manager" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.419217 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef43c8c6-f342-4966-9ad2-a8fa451e0b02" containerName="controller-manager" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.419696 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.423958 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.424359 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.424518 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.424597 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.431619 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.431695 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.434213 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.437080 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-558694589-g5jfs"] Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.541398 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acebbab4-fd6a-48d1-8784-0c6be918f113-proxy-ca-bundles\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " 
pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.541660 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acebbab4-fd6a-48d1-8784-0c6be918f113-serving-cert\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.541910 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmdgs\" (UniqueName: \"kubernetes.io/projected/acebbab4-fd6a-48d1-8784-0c6be918f113-kube-api-access-rmdgs\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.542045 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acebbab4-fd6a-48d1-8784-0c6be918f113-config\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.542312 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acebbab4-fd6a-48d1-8784-0c6be918f113-client-ca\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.643011 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmdgs\" (UniqueName: 
\"kubernetes.io/projected/acebbab4-fd6a-48d1-8784-0c6be918f113-kube-api-access-rmdgs\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.643356 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acebbab4-fd6a-48d1-8784-0c6be918f113-config\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.643390 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acebbab4-fd6a-48d1-8784-0c6be918f113-client-ca\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.643425 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acebbab4-fd6a-48d1-8784-0c6be918f113-proxy-ca-bundles\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.643465 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acebbab4-fd6a-48d1-8784-0c6be918f113-serving-cert\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.644907 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acebbab4-fd6a-48d1-8784-0c6be918f113-proxy-ca-bundles\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.645608 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acebbab4-fd6a-48d1-8784-0c6be918f113-config\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.646575 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acebbab4-fd6a-48d1-8784-0c6be918f113-client-ca\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.647436 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acebbab4-fd6a-48d1-8784-0c6be918f113-serving-cert\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.659204 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmdgs\" (UniqueName: \"kubernetes.io/projected/acebbab4-fd6a-48d1-8784-0c6be918f113-kube-api-access-rmdgs\") pod \"controller-manager-558694589-g5jfs\" (UID: \"acebbab4-fd6a-48d1-8784-0c6be918f113\") " pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 
07:08:42.741711 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:42 crc kubenswrapper[4828]: I1129 07:08:42.939210 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-558694589-g5jfs"] Nov 29 07:08:43 crc kubenswrapper[4828]: I1129 07:08:43.419875 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef43c8c6-f342-4966-9ad2-a8fa451e0b02" path="/var/lib/kubelet/pods/ef43c8c6-f342-4966-9ad2-a8fa451e0b02/volumes" Nov 29 07:08:43 crc kubenswrapper[4828]: I1129 07:08:43.923940 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-558694589-g5jfs" event={"ID":"acebbab4-fd6a-48d1-8784-0c6be918f113","Type":"ContainerStarted","Data":"1de1e0f1c683c83c540e577ddccf58cd97653c68d42451495a01544062849161"} Nov 29 07:08:44 crc kubenswrapper[4828]: I1129 07:08:44.930660 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-558694589-g5jfs" event={"ID":"acebbab4-fd6a-48d1-8784-0c6be918f113","Type":"ContainerStarted","Data":"e53686ef66aee1a58630ec64865d2db7293bb7669aa859bc03032629ee07da44"} Nov 29 07:08:44 crc kubenswrapper[4828]: I1129 07:08:44.931173 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:44 crc kubenswrapper[4828]: I1129 07:08:44.935771 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-558694589-g5jfs" Nov 29 07:08:44 crc kubenswrapper[4828]: I1129 07:08:44.948876 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-558694589-g5jfs" podStartSLOduration=3.948850502 podStartE2EDuration="3.948850502s" podCreationTimestamp="2025-11-29 
07:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:44.945595868 +0000 UTC m=+464.567671936" watchObservedRunningTime="2025-11-29 07:08:44.948850502 +0000 UTC m=+464.570926560" Nov 29 07:08:58 crc kubenswrapper[4828]: I1129 07:08:58.419456 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-q5npl" Nov 29 07:08:58 crc kubenswrapper[4828]: I1129 07:08:58.482520 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h6p6v"] Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.064105 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" podUID="9d23e223-6e12-45ff-80b3-1e65d6c36960" containerName="registry" containerID="cri-o://16af16523b2e021d8e0ac669303d8baef3e00a8da46bde953f071a96f832c842" gracePeriod=30 Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.200155 4828 generic.go:334] "Generic (PLEG): container finished" podID="9d23e223-6e12-45ff-80b3-1e65d6c36960" containerID="16af16523b2e021d8e0ac669303d8baef3e00a8da46bde953f071a96f832c842" exitCode=0 Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.200319 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" event={"ID":"9d23e223-6e12-45ff-80b3-1e65d6c36960","Type":"ContainerDied","Data":"16af16523b2e021d8e0ac669303d8baef3e00a8da46bde953f071a96f832c842"} Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.536451 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.643730 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-bound-sa-token\") pod \"9d23e223-6e12-45ff-80b3-1e65d6c36960\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.643993 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"9d23e223-6e12-45ff-80b3-1e65d6c36960\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.644042 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-registry-tls\") pod \"9d23e223-6e12-45ff-80b3-1e65d6c36960\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.644125 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9d23e223-6e12-45ff-80b3-1e65d6c36960-installation-pull-secrets\") pod \"9d23e223-6e12-45ff-80b3-1e65d6c36960\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.644203 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d23e223-6e12-45ff-80b3-1e65d6c36960-trusted-ca\") pod \"9d23e223-6e12-45ff-80b3-1e65d6c36960\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.644230 4828 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-v9mdd\" (UniqueName: \"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-kube-api-access-v9mdd\") pod \"9d23e223-6e12-45ff-80b3-1e65d6c36960\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.644326 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9d23e223-6e12-45ff-80b3-1e65d6c36960-ca-trust-extracted\") pod \"9d23e223-6e12-45ff-80b3-1e65d6c36960\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.644369 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9d23e223-6e12-45ff-80b3-1e65d6c36960-registry-certificates\") pod \"9d23e223-6e12-45ff-80b3-1e65d6c36960\" (UID: \"9d23e223-6e12-45ff-80b3-1e65d6c36960\") " Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.645583 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d23e223-6e12-45ff-80b3-1e65d6c36960-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9d23e223-6e12-45ff-80b3-1e65d6c36960" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.645584 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d23e223-6e12-45ff-80b3-1e65d6c36960-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d23e223-6e12-45ff-80b3-1e65d6c36960" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.649930 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9d23e223-6e12-45ff-80b3-1e65d6c36960" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.650760 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d23e223-6e12-45ff-80b3-1e65d6c36960-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9d23e223-6e12-45ff-80b3-1e65d6c36960" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.650795 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9d23e223-6e12-45ff-80b3-1e65d6c36960" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.652919 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-kube-api-access-v9mdd" (OuterVolumeSpecName: "kube-api-access-v9mdd") pod "9d23e223-6e12-45ff-80b3-1e65d6c36960" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960"). InnerVolumeSpecName "kube-api-access-v9mdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.653831 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "9d23e223-6e12-45ff-80b3-1e65d6c36960" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.667101 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d23e223-6e12-45ff-80b3-1e65d6c36960-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9d23e223-6e12-45ff-80b3-1e65d6c36960" (UID: "9d23e223-6e12-45ff-80b3-1e65d6c36960"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.745956 4828 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9d23e223-6e12-45ff-80b3-1e65d6c36960-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.746009 4828 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9d23e223-6e12-45ff-80b3-1e65d6c36960-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.746023 4828 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.746033 4828 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.746042 4828 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9d23e223-6e12-45ff-80b3-1e65d6c36960-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.746051 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d23e223-6e12-45ff-80b3-1e65d6c36960-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:24 crc kubenswrapper[4828]: I1129 07:09:24.746060 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9mdd\" (UniqueName: \"kubernetes.io/projected/9d23e223-6e12-45ff-80b3-1e65d6c36960-kube-api-access-v9mdd\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:25 crc kubenswrapper[4828]: I1129 07:09:25.220746 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" event={"ID":"9d23e223-6e12-45ff-80b3-1e65d6c36960","Type":"ContainerDied","Data":"119931424c2ef8abaad1b97730953b233a5bdfd2a34382b960ffe6c1a749ea2d"} Nov 29 07:09:25 crc kubenswrapper[4828]: I1129 07:09:25.220969 4828 scope.go:117] "RemoveContainer" containerID="16af16523b2e021d8e0ac669303d8baef3e00a8da46bde953f071a96f832c842" Nov 29 07:09:25 crc kubenswrapper[4828]: I1129 07:09:25.221227 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-h6p6v" Nov 29 07:09:25 crc kubenswrapper[4828]: I1129 07:09:25.252128 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h6p6v"] Nov 29 07:09:25 crc kubenswrapper[4828]: I1129 07:09:25.260755 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h6p6v"] Nov 29 07:09:25 crc kubenswrapper[4828]: I1129 07:09:25.420907 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d23e223-6e12-45ff-80b3-1e65d6c36960" path="/var/lib/kubelet/pods/9d23e223-6e12-45ff-80b3-1e65d6c36960/volumes" Nov 29 07:10:41 crc kubenswrapper[4828]: I1129 07:10:41.487232 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:10:41 crc kubenswrapper[4828]: I1129 07:10:41.487886 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:11:11 crc kubenswrapper[4828]: I1129 07:11:11.487240 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:11:11 crc kubenswrapper[4828]: I1129 07:11:11.487883 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" 
podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:11:41 crc kubenswrapper[4828]: I1129 07:11:41.486644 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:11:41 crc kubenswrapper[4828]: I1129 07:11:41.487527 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:11:41 crc kubenswrapper[4828]: I1129 07:11:41.487642 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:11:41 crc kubenswrapper[4828]: I1129 07:11:41.488454 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81b06e8db4c29a460c072dc8a796a4c319640158b71110f5d37e4548c1dd9feb"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:11:41 crc kubenswrapper[4828]: I1129 07:11:41.488532 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://81b06e8db4c29a460c072dc8a796a4c319640158b71110f5d37e4548c1dd9feb" gracePeriod=600 Nov 29 
07:11:41 crc kubenswrapper[4828]: I1129 07:11:41.995490 4828 generic.go:334] "Generic (PLEG): container finished" podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="81b06e8db4c29a460c072dc8a796a4c319640158b71110f5d37e4548c1dd9feb" exitCode=0 Nov 29 07:11:41 crc kubenswrapper[4828]: I1129 07:11:41.995812 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"81b06e8db4c29a460c072dc8a796a4c319640158b71110f5d37e4548c1dd9feb"} Nov 29 07:11:41 crc kubenswrapper[4828]: I1129 07:11:41.995957 4828 scope.go:117] "RemoveContainer" containerID="de5680b47e332c14b381bb72b4ac2148493c666a12254a81b7fa5d8120a5bb93" Nov 29 07:11:43 crc kubenswrapper[4828]: I1129 07:11:43.003114 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"f5b914bfefdcc07cd9bb4f5df5d162e71875a1700dbc77fcde461a09b944198b"} Nov 29 07:13:51 crc kubenswrapper[4828]: I1129 07:13:51.553216 4828 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.943642 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-57drp"] Nov 29 07:14:05 crc kubenswrapper[4828]: E1129 07:14:05.944329 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d23e223-6e12-45ff-80b3-1e65d6c36960" containerName="registry" Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.944365 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d23e223-6e12-45ff-80b3-1e65d6c36960" containerName="registry" Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.944567 4828 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9d23e223-6e12-45ff-80b3-1e65d6c36960" containerName="registry" Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.945209 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-57drp" Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.948754 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.948888 4828 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-nlj6x" Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.949018 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.967924 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-57drp"] Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.972924 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-g2ms6"] Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.973791 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-g2ms6" Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.975915 4828 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-j98tl" Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.978695 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-9vbgx"] Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.984645 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-9vbgx" Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.986762 4828 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-7bm56" Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.986953 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-9vbgx"] Nov 29 07:14:05 crc kubenswrapper[4828]: I1129 07:14:05.996370 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-g2ms6"] Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.137645 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzht7\" (UniqueName: \"kubernetes.io/projected/eb60407c-21f0-49e3-87b6-dca32ff366b6-kube-api-access-nzht7\") pod \"cert-manager-cainjector-7f985d654d-57drp\" (UID: \"eb60407c-21f0-49e3-87b6-dca32ff366b6\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-57drp" Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.137742 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfs94\" (UniqueName: \"kubernetes.io/projected/dd2e3aba-f27e-4366-a84e-ed3de11ab39a-kube-api-access-gfs94\") pod \"cert-manager-webhook-5655c58dd6-9vbgx\" (UID: \"dd2e3aba-f27e-4366-a84e-ed3de11ab39a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-9vbgx" Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.137776 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clqh4\" (UniqueName: \"kubernetes.io/projected/f01b92db-d046-4b1c-a23a-84250830a957-kube-api-access-clqh4\") pod \"cert-manager-5b446d88c5-g2ms6\" (UID: \"f01b92db-d046-4b1c-a23a-84250830a957\") " pod="cert-manager/cert-manager-5b446d88c5-g2ms6" Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.239478 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfs94\" (UniqueName: \"kubernetes.io/projected/dd2e3aba-f27e-4366-a84e-ed3de11ab39a-kube-api-access-gfs94\") pod \"cert-manager-webhook-5655c58dd6-9vbgx\" (UID: \"dd2e3aba-f27e-4366-a84e-ed3de11ab39a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-9vbgx" Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.239768 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clqh4\" (UniqueName: \"kubernetes.io/projected/f01b92db-d046-4b1c-a23a-84250830a957-kube-api-access-clqh4\") pod \"cert-manager-5b446d88c5-g2ms6\" (UID: \"f01b92db-d046-4b1c-a23a-84250830a957\") " pod="cert-manager/cert-manager-5b446d88c5-g2ms6" Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.239868 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzht7\" (UniqueName: \"kubernetes.io/projected/eb60407c-21f0-49e3-87b6-dca32ff366b6-kube-api-access-nzht7\") pod \"cert-manager-cainjector-7f985d654d-57drp\" (UID: \"eb60407c-21f0-49e3-87b6-dca32ff366b6\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-57drp" Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.260214 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzht7\" (UniqueName: \"kubernetes.io/projected/eb60407c-21f0-49e3-87b6-dca32ff366b6-kube-api-access-nzht7\") pod \"cert-manager-cainjector-7f985d654d-57drp\" (UID: \"eb60407c-21f0-49e3-87b6-dca32ff366b6\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-57drp" Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.262974 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clqh4\" (UniqueName: \"kubernetes.io/projected/f01b92db-d046-4b1c-a23a-84250830a957-kube-api-access-clqh4\") pod \"cert-manager-5b446d88c5-g2ms6\" (UID: \"f01b92db-d046-4b1c-a23a-84250830a957\") " 
pod="cert-manager/cert-manager-5b446d88c5-g2ms6" Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.264806 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-57drp" Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.294143 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfs94\" (UniqueName: \"kubernetes.io/projected/dd2e3aba-f27e-4366-a84e-ed3de11ab39a-kube-api-access-gfs94\") pod \"cert-manager-webhook-5655c58dd6-9vbgx\" (UID: \"dd2e3aba-f27e-4366-a84e-ed3de11ab39a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-9vbgx" Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.296719 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-g2ms6" Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.307370 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-9vbgx" Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.505413 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-57drp"] Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.518167 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.752608 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-9vbgx"] Nov 29 07:14:06 crc kubenswrapper[4828]: W1129 07:14:06.755409 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd2e3aba_f27e_4366_a84e_ed3de11ab39a.slice/crio-38923466285e7325440d5f6b5f8caa9a7a1cfb1780da6a08d537d29df7bef2a0 WatchSource:0}: Error finding container 38923466285e7325440d5f6b5f8caa9a7a1cfb1780da6a08d537d29df7bef2a0: Status 404 
returned error can't find the container with id 38923466285e7325440d5f6b5f8caa9a7a1cfb1780da6a08d537d29df7bef2a0 Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.763290 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-g2ms6"] Nov 29 07:14:06 crc kubenswrapper[4828]: W1129 07:14:06.773809 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01b92db_d046_4b1c_a23a_84250830a957.slice/crio-a02833a640559ee544da09509c15d245addfefc29ece5e29cc30f42bafdbed71 WatchSource:0}: Error finding container a02833a640559ee544da09509c15d245addfefc29ece5e29cc30f42bafdbed71: Status 404 returned error can't find the container with id a02833a640559ee544da09509c15d245addfefc29ece5e29cc30f42bafdbed71 Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.831448 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-g2ms6" event={"ID":"f01b92db-d046-4b1c-a23a-84250830a957","Type":"ContainerStarted","Data":"a02833a640559ee544da09509c15d245addfefc29ece5e29cc30f42bafdbed71"} Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.832493 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-9vbgx" event={"ID":"dd2e3aba-f27e-4366-a84e-ed3de11ab39a","Type":"ContainerStarted","Data":"38923466285e7325440d5f6b5f8caa9a7a1cfb1780da6a08d537d29df7bef2a0"} Nov 29 07:14:06 crc kubenswrapper[4828]: I1129 07:14:06.833649 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-57drp" event={"ID":"eb60407c-21f0-49e3-87b6-dca32ff366b6","Type":"ContainerStarted","Data":"c7c6b7141c68e1dbc26c0c6e2ec751da67002498108aa766aa384a890909b31e"} Nov 29 07:14:11 crc kubenswrapper[4828]: I1129 07:14:11.486604 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:14:11 crc kubenswrapper[4828]: I1129 07:14:11.487215 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:14:11 crc kubenswrapper[4828]: I1129 07:14:11.879521 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-9vbgx" event={"ID":"dd2e3aba-f27e-4366-a84e-ed3de11ab39a","Type":"ContainerStarted","Data":"5dceda3f15ac64d894403c9b1c1ab08c787730c7d383f37d3aa45874a3b5fd43"} Nov 29 07:14:11 crc kubenswrapper[4828]: I1129 07:14:11.879629 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-9vbgx" Nov 29 07:14:11 crc kubenswrapper[4828]: I1129 07:14:11.880866 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-57drp" event={"ID":"eb60407c-21f0-49e3-87b6-dca32ff366b6","Type":"ContainerStarted","Data":"ed54add229485692987bc908c64cab498e1093149b0044631095189f44795a46"} Nov 29 07:14:11 crc kubenswrapper[4828]: I1129 07:14:11.882460 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-g2ms6" event={"ID":"f01b92db-d046-4b1c-a23a-84250830a957","Type":"ContainerStarted","Data":"d9bd816b43937512385216088758cebb95ee22f4b0bcf2974c0bb1109bfdab8e"} Nov 29 07:14:11 crc kubenswrapper[4828]: I1129 07:14:11.896441 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-9vbgx" podStartSLOduration=2.039169855 
podStartE2EDuration="6.896403742s" podCreationTimestamp="2025-11-29 07:14:05 +0000 UTC" firstStartedPulling="2025-11-29 07:14:06.757750589 +0000 UTC m=+786.379826647" lastFinishedPulling="2025-11-29 07:14:11.614984476 +0000 UTC m=+791.237060534" observedRunningTime="2025-11-29 07:14:11.892908223 +0000 UTC m=+791.514984301" watchObservedRunningTime="2025-11-29 07:14:11.896403742 +0000 UTC m=+791.518479800" Nov 29 07:14:11 crc kubenswrapper[4828]: I1129 07:14:11.912873 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-g2ms6" podStartSLOduration=2.158545297 podStartE2EDuration="6.912851368s" podCreationTimestamp="2025-11-29 07:14:05 +0000 UTC" firstStartedPulling="2025-11-29 07:14:06.776871553 +0000 UTC m=+786.398947611" lastFinishedPulling="2025-11-29 07:14:11.531177624 +0000 UTC m=+791.153253682" observedRunningTime="2025-11-29 07:14:11.910612761 +0000 UTC m=+791.532688839" watchObservedRunningTime="2025-11-29 07:14:11.912851368 +0000 UTC m=+791.534927426" Nov 29 07:14:11 crc kubenswrapper[4828]: I1129 07:14:11.967447 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-57drp" podStartSLOduration=1.953393932 podStartE2EDuration="6.96742749s" podCreationTimestamp="2025-11-29 07:14:05 +0000 UTC" firstStartedPulling="2025-11-29 07:14:06.517857745 +0000 UTC m=+786.139933833" lastFinishedPulling="2025-11-29 07:14:11.531891333 +0000 UTC m=+791.153967391" observedRunningTime="2025-11-29 07:14:11.963748607 +0000 UTC m=+791.585824675" watchObservedRunningTime="2025-11-29 07:14:11.96742749 +0000 UTC m=+791.589503548" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.133586 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-49f6l"] Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.134418 4828 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovn-controller" containerID="cri-o://6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0" gracePeriod=30 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.134892 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="sbdb" containerID="cri-o://ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb" gracePeriod=30 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.134944 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="nbdb" containerID="cri-o://be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc" gracePeriod=30 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.134984 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="northd" containerID="cri-o://d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4" gracePeriod=30 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.135022 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5" gracePeriod=30 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.135050 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="kube-rbac-proxy-node" 
containerID="cri-o://658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39" gracePeriod=30 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.135077 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovn-acl-logging" containerID="cri-o://f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e" gracePeriod=30 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.183083 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovnkube-controller" containerID="cri-o://89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa" gracePeriod=30 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.310120 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-9vbgx" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.501088 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/3.log" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.504031 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovn-acl-logging/0.log" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.504566 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovn-controller/0.log" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.505163 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.560567 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2vrx2"] Nov 29 07:14:16 crc kubenswrapper[4828]: E1129 07:14:16.561169 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="nbdb" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.561296 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="nbdb" Nov 29 07:14:16 crc kubenswrapper[4828]: E1129 07:14:16.561384 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="kubecfg-setup" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.561477 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="kubecfg-setup" Nov 29 07:14:16 crc kubenswrapper[4828]: E1129 07:14:16.561546 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.561604 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 07:14:16 crc kubenswrapper[4828]: E1129 07:14:16.561657 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.561704 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: E1129 07:14:16.561752 4828 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovn-acl-logging" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.561892 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovn-acl-logging" Nov 29 07:14:16 crc kubenswrapper[4828]: E1129 07:14:16.561947 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="sbdb" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.562046 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="sbdb" Nov 29 07:14:16 crc kubenswrapper[4828]: E1129 07:14:16.562134 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovn-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.562201 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovn-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: E1129 07:14:16.562258 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.562469 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: E1129 07:14:16.562544 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="northd" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.562598 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="northd" Nov 29 07:14:16 crc kubenswrapper[4828]: E1129 07:14:16.562652 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" 
containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.562701 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: E1129 07:14:16.562751 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="kube-rbac-proxy-node" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.562802 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="kube-rbac-proxy-node" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.563019 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="nbdb" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.563086 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="sbdb" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.563141 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="northd" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.563197 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.563289 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovn-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.563362 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.563430 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" 
containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.563498 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.563551 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="kube-rbac-proxy-node" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.563604 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovn-acl-logging" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.563654 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.563701 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 07:14:16 crc kubenswrapper[4828]: E1129 07:14:16.563860 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.563925 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: E1129 07:14:16.563996 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.564072 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerName="ovnkube-controller" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.566174 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569011 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-slash\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569047 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-cni-bin\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569071 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-var-lib-openvswitch\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569093 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-run-ovn-kubernetes\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569107 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-systemd\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569467 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovn-node-metrics-cert\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569508 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-node-log\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569524 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-cni-netd\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569545 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-ovn\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569577 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-log-socket\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569597 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rk2h\" (UniqueName: \"kubernetes.io/projected/c273b031-d4b1-480a-9dd1-e26ed759c8a0-kube-api-access-4rk2h\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc 
kubenswrapper[4828]: I1129 07:14:16.569614 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-run-netns\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569632 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-etc-openvswitch\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569665 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovnkube-config\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569704 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovnkube-script-lib\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569786 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-env-overrides\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569829 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569857 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-kubelet\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569884 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-systemd-units\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569915 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-openvswitch\") pod \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\" (UID: \"c273b031-d4b1-480a-9dd1-e26ed759c8a0\") " Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569923 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-node-log" (OuterVolumeSpecName: "node-log") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569921 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569949 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569967 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-slash" (OuterVolumeSpecName: "host-slash") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.569413 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.570110 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.570139 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.570162 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.570575 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.570619 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.570588 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.570636 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.570660 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-log-socket" (OuterVolumeSpecName: "log-socket") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.570689 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.571003 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.571036 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.571192 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.575943 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c273b031-d4b1-480a-9dd1-e26ed759c8a0-kube-api-access-4rk2h" (OuterVolumeSpecName: "kube-api-access-4rk2h") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "kube-api-access-4rk2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.576627 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.586239 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c273b031-d4b1-480a-9dd1-e26ed759c8a0" (UID: "c273b031-d4b1-480a-9dd1-e26ed759c8a0"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.671436 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-var-lib-openvswitch\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.671811 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-run-netns\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.671924 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-kubelet\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.672075 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-node-log\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.672173 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-cni-bin\") pod \"ovnkube-node-2vrx2\" (UID: 
\"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.672261 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65b93e17-af16-40ef-ac16-c4120b5775ae-ovn-node-metrics-cert\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.672373 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65b93e17-af16-40ef-ac16-c4120b5775ae-ovnkube-config\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.672470 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-slash\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.672554 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.672635 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/65b93e17-af16-40ef-ac16-c4120b5775ae-env-overrides\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.672800 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-log-socket\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.672898 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.672997 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/65b93e17-af16-40ef-ac16-c4120b5775ae-ovnkube-script-lib\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.673101 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-run-ovn\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.673193 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-run-systemd\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.673295 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-run-openvswitch\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.673423 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-cni-netd\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.673526 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh8d7\" (UniqueName: \"kubernetes.io/projected/65b93e17-af16-40ef-ac16-c4120b5775ae-kube-api-access-wh8d7\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.673612 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-systemd-units\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.673700 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-etc-openvswitch\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.673820 4828 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-log-socket\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.673883 4828 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.673945 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rk2h\" (UniqueName: \"kubernetes.io/projected/c273b031-d4b1-480a-9dd1-e26ed759c8a0-kube-api-access-4rk2h\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674015 4828 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674077 4828 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674130 4828 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674182 
4828 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c273b031-d4b1-480a-9dd1-e26ed759c8a0-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674240 4828 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674317 4828 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674375 4828 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674439 4828 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674496 4828 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-slash\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674552 4828 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674609 4828 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674702 4828 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674763 4828 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674823 4828 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c273b031-d4b1-480a-9dd1-e26ed759c8a0-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674883 4828 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-node-log\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.674976 4828 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.675042 4828 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c273b031-d4b1-480a-9dd1-e26ed759c8a0-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.776476 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-cni-netd\") pod 
\"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.776893 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh8d7\" (UniqueName: \"kubernetes.io/projected/65b93e17-af16-40ef-ac16-c4120b5775ae-kube-api-access-wh8d7\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777003 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-systemd-units\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777097 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-etc-openvswitch\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777203 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-var-lib-openvswitch\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777296 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-var-lib-openvswitch\") pod \"ovnkube-node-2vrx2\" (UID: 
\"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777132 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-etc-openvswitch\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777097 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-systemd-units\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777319 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-run-netns\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.776691 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-cni-netd\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777433 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-kubelet\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 
crc kubenswrapper[4828]: I1129 07:14:16.777500 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-node-log\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777525 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-cni-bin\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777551 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-kubelet\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777567 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65b93e17-af16-40ef-ac16-c4120b5775ae-ovn-node-metrics-cert\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777638 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65b93e17-af16-40ef-ac16-c4120b5775ae-ovnkube-config\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777676 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-node-log\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777691 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-slash\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777774 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777801 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65b93e17-af16-40ef-ac16-c4120b5775ae-env-overrides\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777832 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-log-socket\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777878 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777911 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/65b93e17-af16-40ef-ac16-c4120b5775ae-ovnkube-script-lib\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777952 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-run-ovn\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777990 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-run-systemd\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.778029 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-run-openvswitch\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.778137 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-run-openvswitch\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.777717 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-slash\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.778187 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.778168 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-cni-bin\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.778448 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-run-ovn\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.778492 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-log-socket\") pod 
\"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.778523 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-run-systemd\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.778583 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65b93e17-af16-40ef-ac16-c4120b5775ae-ovnkube-config\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.778592 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.778847 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65b93e17-af16-40ef-ac16-c4120b5775ae-host-run-netns\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.779064 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/65b93e17-af16-40ef-ac16-c4120b5775ae-ovnkube-script-lib\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.779739 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65b93e17-af16-40ef-ac16-c4120b5775ae-env-overrides\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.781310 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65b93e17-af16-40ef-ac16-c4120b5775ae-ovn-node-metrics-cert\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.796495 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh8d7\" (UniqueName: \"kubernetes.io/projected/65b93e17-af16-40ef-ac16-c4120b5775ae-kube-api-access-wh8d7\") pod \"ovnkube-node-2vrx2\" (UID: \"65b93e17-af16-40ef-ac16-c4120b5775ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.884260 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.914782 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" event={"ID":"65b93e17-af16-40ef-ac16-c4120b5775ae","Type":"ContainerStarted","Data":"5cbab6bd756658ce25c2945aff25c88eafe42d8ff2794b2a1f1dfd564726aa28"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.917122 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovnkube-controller/3.log" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.919567 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovn-acl-logging/0.log" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.920315 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-49f6l_c273b031-d4b1-480a-9dd1-e26ed759c8a0/ovn-controller/0.log" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921376 4828 generic.go:334] "Generic (PLEG): container finished" podID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerID="89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa" exitCode=0 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921422 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerDied","Data":"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921462 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerDied","Data":"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb"} Nov 29 07:14:16 crc 
kubenswrapper[4828]: I1129 07:14:16.921402 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921431 4828 generic.go:334] "Generic (PLEG): container finished" podID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerID="ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb" exitCode=0 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921555 4828 scope.go:117] "RemoveContainer" containerID="89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921554 4828 generic.go:334] "Generic (PLEG): container finished" podID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerID="be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc" exitCode=0 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921589 4828 generic.go:334] "Generic (PLEG): container finished" podID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerID="d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4" exitCode=0 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921598 4828 generic.go:334] "Generic (PLEG): container finished" podID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerID="f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5" exitCode=0 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921605 4828 generic.go:334] "Generic (PLEG): container finished" podID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerID="658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39" exitCode=0 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921612 4828 generic.go:334] "Generic (PLEG): container finished" podID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerID="f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e" exitCode=143 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921619 4828 generic.go:334] "Generic (PLEG): 
container finished" podID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" containerID="6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0" exitCode=143 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921772 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerDied","Data":"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921874 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerDied","Data":"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921893 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerDied","Data":"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921908 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerDied","Data":"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921949 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921976 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921984 4828 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.921993 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922000 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922010 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922017 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922026 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922034 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922045 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerDied","Data":"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e"} Nov 29 
07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922058 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922067 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922075 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922083 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922092 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922101 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922110 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922119 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e"} Nov 29 
07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922127 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922135 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922147 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerDied","Data":"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922160 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922171 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922180 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922189 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922198 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922206 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922215 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922224 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922230 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922237 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922247 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-49f6l" event={"ID":"c273b031-d4b1-480a-9dd1-e26ed759c8a0","Type":"ContainerDied","Data":"e45c516e2b97514c9623ddfea8e7dd6e12e280e2e55e1d07dd88fdf4101cefc3"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922285 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922297 4828 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922305 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922313 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922319 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922326 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922332 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922339 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922346 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.922354 4828 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.926123 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qfj9g_b3a37050-181c-42b4-acf9-dc458a0f5bcf/kube-multus/2.log" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.926818 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qfj9g_b3a37050-181c-42b4-acf9-dc458a0f5bcf/kube-multus/1.log" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.926864 4828 generic.go:334] "Generic (PLEG): container finished" podID="b3a37050-181c-42b4-acf9-dc458a0f5bcf" containerID="0ce01932a55d625ed624dfad578fd1a946c7ae87a5964106d755917f0c7ab53d" exitCode=2 Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.926902 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qfj9g" event={"ID":"b3a37050-181c-42b4-acf9-dc458a0f5bcf","Type":"ContainerDied","Data":"0ce01932a55d625ed624dfad578fd1a946c7ae87a5964106d755917f0c7ab53d"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.926931 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20"} Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.927666 4828 scope.go:117] "RemoveContainer" containerID="0ce01932a55d625ed624dfad578fd1a946c7ae87a5964106d755917f0c7ab53d" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.942364 4828 scope.go:117] "RemoveContainer" containerID="e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.986150 4828 scope.go:117] "RemoveContainer" containerID="ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb" Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 
07:14:16.988025 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-49f6l"] Nov 29 07:14:16 crc kubenswrapper[4828]: I1129 07:14:16.991362 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-49f6l"] Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.021709 4828 scope.go:117] "RemoveContainer" containerID="be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.043317 4828 scope.go:117] "RemoveContainer" containerID="d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.060709 4828 scope.go:117] "RemoveContainer" containerID="f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.079341 4828 scope.go:117] "RemoveContainer" containerID="658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.129136 4828 scope.go:117] "RemoveContainer" containerID="f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.153485 4828 scope.go:117] "RemoveContainer" containerID="6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.174788 4828 scope.go:117] "RemoveContainer" containerID="83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.193067 4828 scope.go:117] "RemoveContainer" containerID="89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa" Nov 29 07:14:17 crc kubenswrapper[4828]: E1129 07:14:17.193797 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa\": container with ID 
starting with 89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa not found: ID does not exist" containerID="89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.193855 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa"} err="failed to get container status \"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa\": rpc error: code = NotFound desc = could not find container \"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa\": container with ID starting with 89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.193883 4828 scope.go:117] "RemoveContainer" containerID="e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b" Nov 29 07:14:17 crc kubenswrapper[4828]: E1129 07:14:17.194246 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\": container with ID starting with e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b not found: ID does not exist" containerID="e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.194317 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b"} err="failed to get container status \"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\": rpc error: code = NotFound desc = could not find container \"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\": container with ID starting with e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b not found: 
ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.194338 4828 scope.go:117] "RemoveContainer" containerID="ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb" Nov 29 07:14:17 crc kubenswrapper[4828]: E1129 07:14:17.194644 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\": container with ID starting with ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb not found: ID does not exist" containerID="ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.194671 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb"} err="failed to get container status \"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\": rpc error: code = NotFound desc = could not find container \"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\": container with ID starting with ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.194701 4828 scope.go:117] "RemoveContainer" containerID="be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc" Nov 29 07:14:17 crc kubenswrapper[4828]: E1129 07:14:17.194948 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\": container with ID starting with be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc not found: ID does not exist" containerID="be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.194982 4828 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc"} err="failed to get container status \"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\": rpc error: code = NotFound desc = could not find container \"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\": container with ID starting with be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.195000 4828 scope.go:117] "RemoveContainer" containerID="d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4" Nov 29 07:14:17 crc kubenswrapper[4828]: E1129 07:14:17.195221 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\": container with ID starting with d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4 not found: ID does not exist" containerID="d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.195293 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4"} err="failed to get container status \"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\": rpc error: code = NotFound desc = could not find container \"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\": container with ID starting with d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.195312 4828 scope.go:117] "RemoveContainer" containerID="f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5" Nov 29 07:14:17 crc kubenswrapper[4828]: E1129 07:14:17.195761 4828 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\": container with ID starting with f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5 not found: ID does not exist" containerID="f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.195787 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5"} err="failed to get container status \"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\": rpc error: code = NotFound desc = could not find container \"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\": container with ID starting with f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.195799 4828 scope.go:117] "RemoveContainer" containerID="658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39" Nov 29 07:14:17 crc kubenswrapper[4828]: E1129 07:14:17.196030 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\": container with ID starting with 658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39 not found: ID does not exist" containerID="658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.196056 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39"} err="failed to get container status \"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\": rpc error: code = NotFound desc = could 
not find container \"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\": container with ID starting with 658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.196072 4828 scope.go:117] "RemoveContainer" containerID="f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e" Nov 29 07:14:17 crc kubenswrapper[4828]: E1129 07:14:17.196364 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\": container with ID starting with f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e not found: ID does not exist" containerID="f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.196391 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e"} err="failed to get container status \"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\": rpc error: code = NotFound desc = could not find container \"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\": container with ID starting with f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.196407 4828 scope.go:117] "RemoveContainer" containerID="6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0" Nov 29 07:14:17 crc kubenswrapper[4828]: E1129 07:14:17.196831 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\": container with ID starting with 6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0 not found: 
ID does not exist" containerID="6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.196858 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0"} err="failed to get container status \"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\": rpc error: code = NotFound desc = could not find container \"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\": container with ID starting with 6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.196875 4828 scope.go:117] "RemoveContainer" containerID="83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee" Nov 29 07:14:17 crc kubenswrapper[4828]: E1129 07:14:17.197253 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\": container with ID starting with 83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee not found: ID does not exist" containerID="83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.197577 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee"} err="failed to get container status \"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\": rpc error: code = NotFound desc = could not find container \"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\": container with ID starting with 83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.197599 4828 
scope.go:117] "RemoveContainer" containerID="89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.197928 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa"} err="failed to get container status \"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa\": rpc error: code = NotFound desc = could not find container \"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa\": container with ID starting with 89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.197956 4828 scope.go:117] "RemoveContainer" containerID="e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.198224 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b"} err="failed to get container status \"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\": rpc error: code = NotFound desc = could not find container \"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\": container with ID starting with e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.198462 4828 scope.go:117] "RemoveContainer" containerID="ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.198810 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb"} err="failed to get container status \"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\": rpc 
error: code = NotFound desc = could not find container \"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\": container with ID starting with ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.198839 4828 scope.go:117] "RemoveContainer" containerID="be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.199595 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc"} err="failed to get container status \"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\": rpc error: code = NotFound desc = could not find container \"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\": container with ID starting with be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.199654 4828 scope.go:117] "RemoveContainer" containerID="d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.199975 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4"} err="failed to get container status \"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\": rpc error: code = NotFound desc = could not find container \"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\": container with ID starting with d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.200011 4828 scope.go:117] "RemoveContainer" containerID="f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5" Nov 29 07:14:17 crc 
kubenswrapper[4828]: I1129 07:14:17.200528 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5"} err="failed to get container status \"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\": rpc error: code = NotFound desc = could not find container \"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\": container with ID starting with f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.200557 4828 scope.go:117] "RemoveContainer" containerID="658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.200827 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39"} err="failed to get container status \"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\": rpc error: code = NotFound desc = could not find container \"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\": container with ID starting with 658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.200852 4828 scope.go:117] "RemoveContainer" containerID="f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.201159 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e"} err="failed to get container status \"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\": rpc error: code = NotFound desc = could not find container \"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\": container 
with ID starting with f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.201182 4828 scope.go:117] "RemoveContainer" containerID="6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.201452 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0"} err="failed to get container status \"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\": rpc error: code = NotFound desc = could not find container \"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\": container with ID starting with 6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.201484 4828 scope.go:117] "RemoveContainer" containerID="83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.207350 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee"} err="failed to get container status \"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\": rpc error: code = NotFound desc = could not find container \"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\": container with ID starting with 83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.207412 4828 scope.go:117] "RemoveContainer" containerID="89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.207825 4828 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa"} err="failed to get container status \"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa\": rpc error: code = NotFound desc = could not find container \"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa\": container with ID starting with 89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.207848 4828 scope.go:117] "RemoveContainer" containerID="e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.208070 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b"} err="failed to get container status \"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\": rpc error: code = NotFound desc = could not find container \"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\": container with ID starting with e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.208092 4828 scope.go:117] "RemoveContainer" containerID="ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.208311 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb"} err="failed to get container status \"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\": rpc error: code = NotFound desc = could not find container \"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\": container with ID starting with ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb not found: ID does not 
exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.208338 4828 scope.go:117] "RemoveContainer" containerID="be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.208620 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc"} err="failed to get container status \"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\": rpc error: code = NotFound desc = could not find container \"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\": container with ID starting with be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.208648 4828 scope.go:117] "RemoveContainer" containerID="d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.208857 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4"} err="failed to get container status \"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\": rpc error: code = NotFound desc = could not find container \"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\": container with ID starting with d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.208881 4828 scope.go:117] "RemoveContainer" containerID="f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.209167 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5"} err="failed to get container status 
\"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\": rpc error: code = NotFound desc = could not find container \"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\": container with ID starting with f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.209196 4828 scope.go:117] "RemoveContainer" containerID="658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.209454 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39"} err="failed to get container status \"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\": rpc error: code = NotFound desc = could not find container \"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\": container with ID starting with 658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.209477 4828 scope.go:117] "RemoveContainer" containerID="f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.209725 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e"} err="failed to get container status \"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\": rpc error: code = NotFound desc = could not find container \"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\": container with ID starting with f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.209748 4828 scope.go:117] "RemoveContainer" 
containerID="6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.209939 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0"} err="failed to get container status \"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\": rpc error: code = NotFound desc = could not find container \"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\": container with ID starting with 6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.209957 4828 scope.go:117] "RemoveContainer" containerID="83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.210138 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee"} err="failed to get container status \"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\": rpc error: code = NotFound desc = could not find container \"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\": container with ID starting with 83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.210158 4828 scope.go:117] "RemoveContainer" containerID="89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.210416 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa"} err="failed to get container status \"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa\": rpc error: code = NotFound desc = could 
not find container \"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa\": container with ID starting with 89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.210435 4828 scope.go:117] "RemoveContainer" containerID="e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.210624 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b"} err="failed to get container status \"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\": rpc error: code = NotFound desc = could not find container \"e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b\": container with ID starting with e60cbf292507325533765e278d4f3ad2b92aeffb809ceaa57ac2461849bce99b not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.210647 4828 scope.go:117] "RemoveContainer" containerID="ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.210826 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb"} err="failed to get container status \"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\": rpc error: code = NotFound desc = could not find container \"ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb\": container with ID starting with ccf1007ec3d196d70b397b5c0e4ae48387ff78cb512f14a26ccfde788f7df7cb not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.210845 4828 scope.go:117] "RemoveContainer" containerID="be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 
07:14:17.211034 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc"} err="failed to get container status \"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\": rpc error: code = NotFound desc = could not find container \"be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc\": container with ID starting with be81786aa4d8be5f838e2df2d264104caaac9078d8bb677c7da3c46d23f777cc not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.211061 4828 scope.go:117] "RemoveContainer" containerID="d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.211668 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4"} err="failed to get container status \"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\": rpc error: code = NotFound desc = could not find container \"d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4\": container with ID starting with d57fa044a2eca6f5896449a77858b9c5bd812fb1d9c3fd7a7be2b779f2ab70c4 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.211690 4828 scope.go:117] "RemoveContainer" containerID="f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.211881 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5"} err="failed to get container status \"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\": rpc error: code = NotFound desc = could not find container \"f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5\": container with ID starting with 
f867076215340d73de99772daf5a3b0e947cef04ffe16695f18af16f33c2eaf5 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.211906 4828 scope.go:117] "RemoveContainer" containerID="658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.212140 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39"} err="failed to get container status \"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\": rpc error: code = NotFound desc = could not find container \"658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39\": container with ID starting with 658f05587a361315009a2052d58e1c4401088129d5c8d300b1b7fe6546c10f39 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.212164 4828 scope.go:117] "RemoveContainer" containerID="f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.212356 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e"} err="failed to get container status \"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\": rpc error: code = NotFound desc = could not find container \"f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e\": container with ID starting with f73ddc673317fe725a390ffdaf0df8256d892cdccb045251016d86ab59c1a73e not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.212381 4828 scope.go:117] "RemoveContainer" containerID="6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.212549 4828 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0"} err="failed to get container status \"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\": rpc error: code = NotFound desc = could not find container \"6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0\": container with ID starting with 6ec2d8701284faca2d2d87a8d76b739c41431ccecf665c329f20be328275ead0 not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.212570 4828 scope.go:117] "RemoveContainer" containerID="83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.212754 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee"} err="failed to get container status \"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\": rpc error: code = NotFound desc = could not find container \"83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee\": container with ID starting with 83c6f9b5f6b6200044f0dac3dbd2ef82fe43564f2484453c48a9ffacfdd303ee not found: ID does not exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.212780 4828 scope.go:117] "RemoveContainer" containerID="89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.213122 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa"} err="failed to get container status \"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa\": rpc error: code = NotFound desc = could not find container \"89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa\": container with ID starting with 89cde4188dd7302d0f3e5f79518e95ac7688eb9a1fb8e4656e13a0da680171aa not found: ID does not 
exist" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.420245 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c273b031-d4b1-480a-9dd1-e26ed759c8a0" path="/var/lib/kubelet/pods/c273b031-d4b1-480a-9dd1-e26ed759c8a0/volumes" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.937298 4828 generic.go:334] "Generic (PLEG): container finished" podID="65b93e17-af16-40ef-ac16-c4120b5775ae" containerID="f8409d13896b4fbaac1c1bfde59f6c1f3fb38aaa5f432072861519ed84e8cfe0" exitCode=0 Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.937393 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" event={"ID":"65b93e17-af16-40ef-ac16-c4120b5775ae","Type":"ContainerDied","Data":"f8409d13896b4fbaac1c1bfde59f6c1f3fb38aaa5f432072861519ed84e8cfe0"} Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.943067 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qfj9g_b3a37050-181c-42b4-acf9-dc458a0f5bcf/kube-multus/2.log" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.943684 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qfj9g_b3a37050-181c-42b4-acf9-dc458a0f5bcf/kube-multus/1.log" Nov 29 07:14:17 crc kubenswrapper[4828]: I1129 07:14:17.943757 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qfj9g" event={"ID":"b3a37050-181c-42b4-acf9-dc458a0f5bcf","Type":"ContainerStarted","Data":"45899bf4216b6c9facebf55d21cbf5f4dfe8ef6908b8dbcfca54256db27c7712"} Nov 29 07:14:18 crc kubenswrapper[4828]: I1129 07:14:18.953860 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" event={"ID":"65b93e17-af16-40ef-ac16-c4120b5775ae","Type":"ContainerStarted","Data":"39f2ee61369ad96d47ed99d0679b1fdda740ba7e03cac1e6413e7a640f3827b0"} Nov 29 07:14:18 crc kubenswrapper[4828]: I1129 07:14:18.954227 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" event={"ID":"65b93e17-af16-40ef-ac16-c4120b5775ae","Type":"ContainerStarted","Data":"334d9c0742b5735b1db2eace8acb2802a89cd9d6d3341d43ea97d997a94f2a3d"} Nov 29 07:14:18 crc kubenswrapper[4828]: I1129 07:14:18.954243 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" event={"ID":"65b93e17-af16-40ef-ac16-c4120b5775ae","Type":"ContainerStarted","Data":"502b0864c82028116236f58cdc6b8d7eefa269a59001cab37fcc1780f65eb285"} Nov 29 07:14:18 crc kubenswrapper[4828]: I1129 07:14:18.954254 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" event={"ID":"65b93e17-af16-40ef-ac16-c4120b5775ae","Type":"ContainerStarted","Data":"b7365172da272f40ac425602795fb53d0709221b1dba66d99593e0ef52085613"} Nov 29 07:14:18 crc kubenswrapper[4828]: I1129 07:14:18.954264 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" event={"ID":"65b93e17-af16-40ef-ac16-c4120b5775ae","Type":"ContainerStarted","Data":"0fa19ce1e24c5b76b93911dc021fb711977b1c519289d841d64b3c63cf1b3aa9"} Nov 29 07:14:18 crc kubenswrapper[4828]: I1129 07:14:18.954374 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" event={"ID":"65b93e17-af16-40ef-ac16-c4120b5775ae","Type":"ContainerStarted","Data":"781b942422bf39d1889c6aa68d1ab8b5b586bae407c7e83803822319161a9896"} Nov 29 07:14:20 crc kubenswrapper[4828]: I1129 07:14:20.973493 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" event={"ID":"65b93e17-af16-40ef-ac16-c4120b5775ae","Type":"ContainerStarted","Data":"a7cfd732b8de163780211343bc75bddcecd66af4f2d3b208d52798f081b037b3"} Nov 29 07:14:23 crc kubenswrapper[4828]: I1129 07:14:23.995196 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" 
event={"ID":"65b93e17-af16-40ef-ac16-c4120b5775ae","Type":"ContainerStarted","Data":"319f30997d57a7f46fbd7993c6e86d56b71b6ae4c63de588cab292e7c82429be"} Nov 29 07:14:23 crc kubenswrapper[4828]: I1129 07:14:23.995819 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:23 crc kubenswrapper[4828]: I1129 07:14:23.995837 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:24 crc kubenswrapper[4828]: I1129 07:14:24.025845 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:24 crc kubenswrapper[4828]: I1129 07:14:24.034021 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" podStartSLOduration=8.033991458 podStartE2EDuration="8.033991458s" podCreationTimestamp="2025-11-29 07:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:14:24.028282284 +0000 UTC m=+803.650358362" watchObservedRunningTime="2025-11-29 07:14:24.033991458 +0000 UTC m=+803.656067516" Nov 29 07:14:25 crc kubenswrapper[4828]: I1129 07:14:25.001115 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:25 crc kubenswrapper[4828]: I1129 07:14:25.027657 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:41 crc kubenswrapper[4828]: I1129 07:14:41.486857 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Nov 29 07:14:41 crc kubenswrapper[4828]: I1129 07:14:41.487532 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:14:46 crc kubenswrapper[4828]: I1129 07:14:46.905853 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2vrx2" Nov 29 07:14:59 crc kubenswrapper[4828]: I1129 07:14:59.693219 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn"] Nov 29 07:14:59 crc kubenswrapper[4828]: I1129 07:14:59.695434 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" Nov 29 07:14:59 crc kubenswrapper[4828]: I1129 07:14:59.704833 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 29 07:14:59 crc kubenswrapper[4828]: I1129 07:14:59.737512 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn"] Nov 29 07:14:59 crc kubenswrapper[4828]: I1129 07:14:59.788982 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn\" (UID: \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" Nov 29 07:14:59 crc kubenswrapper[4828]: I1129 07:14:59.789062 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxtr6\" (UniqueName: \"kubernetes.io/projected/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-kube-api-access-hxtr6\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn\" (UID: \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" Nov 29 07:14:59 crc kubenswrapper[4828]: I1129 07:14:59.789110 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn\" (UID: \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" Nov 29 07:14:59 crc kubenswrapper[4828]: I1129 07:14:59.890501 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn\" (UID: \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" Nov 29 07:14:59 crc kubenswrapper[4828]: I1129 07:14:59.890594 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxtr6\" (UniqueName: \"kubernetes.io/projected/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-kube-api-access-hxtr6\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn\" (UID: \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" Nov 29 07:14:59 crc kubenswrapper[4828]: I1129 07:14:59.890658 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn\" (UID: \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" Nov 29 07:14:59 crc kubenswrapper[4828]: I1129 07:14:59.891303 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn\" (UID: \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" Nov 29 07:14:59 crc kubenswrapper[4828]: I1129 07:14:59.891630 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn\" (UID: \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" Nov 29 07:14:59 crc kubenswrapper[4828]: I1129 07:14:59.921772 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxtr6\" (UniqueName: \"kubernetes.io/projected/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-kube-api-access-hxtr6\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn\" (UID: \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.021845 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.172664 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56"] Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.174114 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.178716 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.179624 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.192150 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56"] Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.263943 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn"] Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.296870 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92n7b\" (UniqueName: \"kubernetes.io/projected/23a148e4-21ef-4210-9d9a-592a9f5a663c-kube-api-access-92n7b\") pod \"collect-profiles-29406675-2hg56\" (UID: \"23a148e4-21ef-4210-9d9a-592a9f5a663c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.296958 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/23a148e4-21ef-4210-9d9a-592a9f5a663c-secret-volume\") pod \"collect-profiles-29406675-2hg56\" (UID: \"23a148e4-21ef-4210-9d9a-592a9f5a663c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.296992 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23a148e4-21ef-4210-9d9a-592a9f5a663c-config-volume\") pod \"collect-profiles-29406675-2hg56\" (UID: \"23a148e4-21ef-4210-9d9a-592a9f5a663c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.398361 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92n7b\" (UniqueName: \"kubernetes.io/projected/23a148e4-21ef-4210-9d9a-592a9f5a663c-kube-api-access-92n7b\") pod \"collect-profiles-29406675-2hg56\" (UID: \"23a148e4-21ef-4210-9d9a-592a9f5a663c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.398427 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23a148e4-21ef-4210-9d9a-592a9f5a663c-secret-volume\") pod \"collect-profiles-29406675-2hg56\" (UID: \"23a148e4-21ef-4210-9d9a-592a9f5a663c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.398465 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23a148e4-21ef-4210-9d9a-592a9f5a663c-config-volume\") pod \"collect-profiles-29406675-2hg56\" (UID: \"23a148e4-21ef-4210-9d9a-592a9f5a663c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" Nov 29 07:15:00 crc kubenswrapper[4828]: 
I1129 07:15:00.399622 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23a148e4-21ef-4210-9d9a-592a9f5a663c-config-volume\") pod \"collect-profiles-29406675-2hg56\" (UID: \"23a148e4-21ef-4210-9d9a-592a9f5a663c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.405061 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23a148e4-21ef-4210-9d9a-592a9f5a663c-secret-volume\") pod \"collect-profiles-29406675-2hg56\" (UID: \"23a148e4-21ef-4210-9d9a-592a9f5a663c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.417363 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92n7b\" (UniqueName: \"kubernetes.io/projected/23a148e4-21ef-4210-9d9a-592a9f5a663c-kube-api-access-92n7b\") pod \"collect-profiles-29406675-2hg56\" (UID: \"23a148e4-21ef-4210-9d9a-592a9f5a663c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.504413 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" Nov 29 07:15:00 crc kubenswrapper[4828]: I1129 07:15:00.751588 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56"] Nov 29 07:15:00 crc kubenswrapper[4828]: W1129 07:15:00.783018 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23a148e4_21ef_4210_9d9a_592a9f5a663c.slice/crio-e81b37b1ac9ea1d0c2e87afb654a2e8348f7026d89fb8c9c57f5ab74f2baf9f3 WatchSource:0}: Error finding container e81b37b1ac9ea1d0c2e87afb654a2e8348f7026d89fb8c9c57f5ab74f2baf9f3: Status 404 returned error can't find the container with id e81b37b1ac9ea1d0c2e87afb654a2e8348f7026d89fb8c9c57f5ab74f2baf9f3 Nov 29 07:15:01 crc kubenswrapper[4828]: I1129 07:15:01.217803 4828 generic.go:334] "Generic (PLEG): container finished" podID="d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2" containerID="f152c26cf503c9efcd974d5ddb2c792691f4b92774c8c65e06f80fe75b288aff" exitCode=0 Nov 29 07:15:01 crc kubenswrapper[4828]: I1129 07:15:01.217880 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" event={"ID":"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2","Type":"ContainerDied","Data":"f152c26cf503c9efcd974d5ddb2c792691f4b92774c8c65e06f80fe75b288aff"} Nov 29 07:15:01 crc kubenswrapper[4828]: I1129 07:15:01.217909 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" event={"ID":"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2","Type":"ContainerStarted","Data":"b4dcc49b5d12d3898c282b32ba8028347608304c5a40bffa2dbbdc0cc776d639"} Nov 29 07:15:01 crc kubenswrapper[4828]: I1129 07:15:01.223784 4828 generic.go:334] "Generic (PLEG): container finished" podID="23a148e4-21ef-4210-9d9a-592a9f5a663c" 
containerID="514102033b9802fc6930d884788b91641b27e5e68d75e484cc5ce8303272e5b7" exitCode=0 Nov 29 07:15:01 crc kubenswrapper[4828]: I1129 07:15:01.223838 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" event={"ID":"23a148e4-21ef-4210-9d9a-592a9f5a663c","Type":"ContainerDied","Data":"514102033b9802fc6930d884788b91641b27e5e68d75e484cc5ce8303272e5b7"} Nov 29 07:15:01 crc kubenswrapper[4828]: I1129 07:15:01.223869 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" event={"ID":"23a148e4-21ef-4210-9d9a-592a9f5a663c","Type":"ContainerStarted","Data":"e81b37b1ac9ea1d0c2e87afb654a2e8348f7026d89fb8c9c57f5ab74f2baf9f3"} Nov 29 07:15:01 crc kubenswrapper[4828]: I1129 07:15:01.805446 4828 scope.go:117] "RemoveContainer" containerID="81e401d8d8b9c29ed3c24f7d6ee85cfc2e3efb02fca9b0351436815dd1676c20" Nov 29 07:15:01 crc kubenswrapper[4828]: I1129 07:15:01.839325 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lzqbk"] Nov 29 07:15:01 crc kubenswrapper[4828]: I1129 07:15:01.841070 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:01 crc kubenswrapper[4828]: I1129 07:15:01.940022 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lzqbk"] Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.044000 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-catalog-content\") pod \"redhat-operators-lzqbk\" (UID: \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\") " pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.044192 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-utilities\") pod \"redhat-operators-lzqbk\" (UID: \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\") " pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.044223 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlckr\" (UniqueName: \"kubernetes.io/projected/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-kube-api-access-qlckr\") pod \"redhat-operators-lzqbk\" (UID: \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\") " pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.145824 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-catalog-content\") pod \"redhat-operators-lzqbk\" (UID: \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\") " pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.145896 4828 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-utilities\") pod \"redhat-operators-lzqbk\" (UID: \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\") " pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.145915 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlckr\" (UniqueName: \"kubernetes.io/projected/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-kube-api-access-qlckr\") pod \"redhat-operators-lzqbk\" (UID: \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\") " pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.146735 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-catalog-content\") pod \"redhat-operators-lzqbk\" (UID: \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\") " pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.146945 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-utilities\") pod \"redhat-operators-lzqbk\" (UID: \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\") " pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.166594 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlckr\" (UniqueName: \"kubernetes.io/projected/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-kube-api-access-qlckr\") pod \"redhat-operators-lzqbk\" (UID: \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\") " pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.235948 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-qfj9g_b3a37050-181c-42b4-acf9-dc458a0f5bcf/kube-multus/2.log" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.255687 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.484790 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lzqbk"] Nov 29 07:15:02 crc kubenswrapper[4828]: W1129 07:15:02.492645 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2bd5073_3724_4d31_a7bf_dc2f5faa090d.slice/crio-7fe6316e524dbf363b4c8a51e1b78ca98c1708205c2f33bd35c7cba8909c4131 WatchSource:0}: Error finding container 7fe6316e524dbf363b4c8a51e1b78ca98c1708205c2f33bd35c7cba8909c4131: Status 404 returned error can't find the container with id 7fe6316e524dbf363b4c8a51e1b78ca98c1708205c2f33bd35c7cba8909c4131 Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.493548 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.651172 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92n7b\" (UniqueName: \"kubernetes.io/projected/23a148e4-21ef-4210-9d9a-592a9f5a663c-kube-api-access-92n7b\") pod \"23a148e4-21ef-4210-9d9a-592a9f5a663c\" (UID: \"23a148e4-21ef-4210-9d9a-592a9f5a663c\") " Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.651604 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23a148e4-21ef-4210-9d9a-592a9f5a663c-secret-volume\") pod \"23a148e4-21ef-4210-9d9a-592a9f5a663c\" (UID: \"23a148e4-21ef-4210-9d9a-592a9f5a663c\") " Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.651679 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23a148e4-21ef-4210-9d9a-592a9f5a663c-config-volume\") pod \"23a148e4-21ef-4210-9d9a-592a9f5a663c\" (UID: \"23a148e4-21ef-4210-9d9a-592a9f5a663c\") " Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.652243 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a148e4-21ef-4210-9d9a-592a9f5a663c-config-volume" (OuterVolumeSpecName: "config-volume") pod "23a148e4-21ef-4210-9d9a-592a9f5a663c" (UID: "23a148e4-21ef-4210-9d9a-592a9f5a663c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.658511 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a148e4-21ef-4210-9d9a-592a9f5a663c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "23a148e4-21ef-4210-9d9a-592a9f5a663c" (UID: "23a148e4-21ef-4210-9d9a-592a9f5a663c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.659389 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a148e4-21ef-4210-9d9a-592a9f5a663c-kube-api-access-92n7b" (OuterVolumeSpecName: "kube-api-access-92n7b") pod "23a148e4-21ef-4210-9d9a-592a9f5a663c" (UID: "23a148e4-21ef-4210-9d9a-592a9f5a663c"). InnerVolumeSpecName "kube-api-access-92n7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.753511 4828 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23a148e4-21ef-4210-9d9a-592a9f5a663c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.753556 4828 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23a148e4-21ef-4210-9d9a-592a9f5a663c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:02 crc kubenswrapper[4828]: I1129 07:15:02.753567 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92n7b\" (UniqueName: \"kubernetes.io/projected/23a148e4-21ef-4210-9d9a-592a9f5a663c-kube-api-access-92n7b\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:03 crc kubenswrapper[4828]: I1129 07:15:03.243308 4828 generic.go:334] "Generic (PLEG): container finished" podID="d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2" containerID="c1b25c5fe97a3c4fdffd6899ab100868bf4eb5068ac24ce8187c20df96bb95f1" exitCode=0 Nov 29 07:15:03 crc kubenswrapper[4828]: I1129 07:15:03.243376 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" event={"ID":"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2","Type":"ContainerDied","Data":"c1b25c5fe97a3c4fdffd6899ab100868bf4eb5068ac24ce8187c20df96bb95f1"} Nov 29 07:15:03 crc kubenswrapper[4828]: I1129 
07:15:03.245213 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" event={"ID":"23a148e4-21ef-4210-9d9a-592a9f5a663c","Type":"ContainerDied","Data":"e81b37b1ac9ea1d0c2e87afb654a2e8348f7026d89fb8c9c57f5ab74f2baf9f3"} Nov 29 07:15:03 crc kubenswrapper[4828]: I1129 07:15:03.245310 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e81b37b1ac9ea1d0c2e87afb654a2e8348f7026d89fb8c9c57f5ab74f2baf9f3" Nov 29 07:15:03 crc kubenswrapper[4828]: I1129 07:15:03.245233 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56" Nov 29 07:15:03 crc kubenswrapper[4828]: I1129 07:15:03.247022 4828 generic.go:334] "Generic (PLEG): container finished" podID="e2bd5073-3724-4d31-a7bf-dc2f5faa090d" containerID="1729172a5c08205b1170926ad95bffe3ae66112045997fa932d5db4df1e2ddeb" exitCode=0 Nov 29 07:15:03 crc kubenswrapper[4828]: I1129 07:15:03.247058 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzqbk" event={"ID":"e2bd5073-3724-4d31-a7bf-dc2f5faa090d","Type":"ContainerDied","Data":"1729172a5c08205b1170926ad95bffe3ae66112045997fa932d5db4df1e2ddeb"} Nov 29 07:15:03 crc kubenswrapper[4828]: I1129 07:15:03.247085 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzqbk" event={"ID":"e2bd5073-3724-4d31-a7bf-dc2f5faa090d","Type":"ContainerStarted","Data":"7fe6316e524dbf363b4c8a51e1b78ca98c1708205c2f33bd35c7cba8909c4131"} Nov 29 07:15:04 crc kubenswrapper[4828]: I1129 07:15:04.256529 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzqbk" event={"ID":"e2bd5073-3724-4d31-a7bf-dc2f5faa090d","Type":"ContainerStarted","Data":"7096c89ace5beeeb28fcb432193d5abcaf0847a89ea32740da59035fa7847ebe"} Nov 29 07:15:04 crc kubenswrapper[4828]: I1129 
07:15:04.259767 4828 generic.go:334] "Generic (PLEG): container finished" podID="d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2" containerID="253d0b5f8e8abcf52c0e0556fff60c995b2921f6df9d9fb714eb18c6785e7430" exitCode=0 Nov 29 07:15:04 crc kubenswrapper[4828]: I1129 07:15:04.259830 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" event={"ID":"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2","Type":"ContainerDied","Data":"253d0b5f8e8abcf52c0e0556fff60c995b2921f6df9d9fb714eb18c6785e7430"} Nov 29 07:15:05 crc kubenswrapper[4828]: I1129 07:15:05.993279 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" Nov 29 07:15:06 crc kubenswrapper[4828]: I1129 07:15:06.126864 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxtr6\" (UniqueName: \"kubernetes.io/projected/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-kube-api-access-hxtr6\") pod \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\" (UID: \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\") " Nov 29 07:15:06 crc kubenswrapper[4828]: I1129 07:15:06.127035 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-util\") pod \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\" (UID: \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\") " Nov 29 07:15:06 crc kubenswrapper[4828]: I1129 07:15:06.127060 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-bundle\") pod \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\" (UID: \"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2\") " Nov 29 07:15:06 crc kubenswrapper[4828]: I1129 07:15:06.127787 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-bundle" (OuterVolumeSpecName: "bundle") pod "d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2" (UID: "d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:15:06 crc kubenswrapper[4828]: I1129 07:15:06.138746 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-kube-api-access-hxtr6" (OuterVolumeSpecName: "kube-api-access-hxtr6") pod "d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2" (UID: "d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2"). InnerVolumeSpecName "kube-api-access-hxtr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:06 crc kubenswrapper[4828]: I1129 07:15:06.139324 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-util" (OuterVolumeSpecName: "util") pod "d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2" (UID: "d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:15:06 crc kubenswrapper[4828]: I1129 07:15:06.228686 4828 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:06 crc kubenswrapper[4828]: I1129 07:15:06.228738 4828 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:06 crc kubenswrapper[4828]: I1129 07:15:06.228751 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxtr6\" (UniqueName: \"kubernetes.io/projected/d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2-kube-api-access-hxtr6\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:06 crc kubenswrapper[4828]: I1129 07:15:06.276663 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" Nov 29 07:15:06 crc kubenswrapper[4828]: I1129 07:15:06.276561 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn" event={"ID":"d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2","Type":"ContainerDied","Data":"b4dcc49b5d12d3898c282b32ba8028347608304c5a40bffa2dbbdc0cc776d639"} Nov 29 07:15:06 crc kubenswrapper[4828]: I1129 07:15:06.276815 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4dcc49b5d12d3898c282b32ba8028347608304c5a40bffa2dbbdc0cc776d639" Nov 29 07:15:07 crc kubenswrapper[4828]: I1129 07:15:07.284950 4828 generic.go:334] "Generic (PLEG): container finished" podID="e2bd5073-3724-4d31-a7bf-dc2f5faa090d" containerID="7096c89ace5beeeb28fcb432193d5abcaf0847a89ea32740da59035fa7847ebe" exitCode=0 Nov 29 07:15:07 crc kubenswrapper[4828]: I1129 07:15:07.285051 4828 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzqbk" event={"ID":"e2bd5073-3724-4d31-a7bf-dc2f5faa090d","Type":"ContainerDied","Data":"7096c89ace5beeeb28fcb432193d5abcaf0847a89ea32740da59035fa7847ebe"} Nov 29 07:15:09 crc kubenswrapper[4828]: I1129 07:15:09.299237 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzqbk" event={"ID":"e2bd5073-3724-4d31-a7bf-dc2f5faa090d","Type":"ContainerStarted","Data":"7d4f013069651355b8fb2ccb8f5a76e66ef7073bac5b130d45670dc0a366fa08"} Nov 29 07:15:09 crc kubenswrapper[4828]: I1129 07:15:09.320044 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lzqbk" podStartSLOduration=3.491278693 podStartE2EDuration="8.31999322s" podCreationTimestamp="2025-11-29 07:15:01 +0000 UTC" firstStartedPulling="2025-11-29 07:15:03.248353995 +0000 UTC m=+842.870430053" lastFinishedPulling="2025-11-29 07:15:08.077068522 +0000 UTC m=+847.699144580" observedRunningTime="2025-11-29 07:15:09.315052745 +0000 UTC m=+848.937128803" watchObservedRunningTime="2025-11-29 07:15:09.31999322 +0000 UTC m=+848.942069278" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.430922 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-ghxdc"] Nov 29 07:15:10 crc kubenswrapper[4828]: E1129 07:15:10.431524 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a148e4-21ef-4210-9d9a-592a9f5a663c" containerName="collect-profiles" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.431545 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a148e4-21ef-4210-9d9a-592a9f5a663c" containerName="collect-profiles" Nov 29 07:15:10 crc kubenswrapper[4828]: E1129 07:15:10.431567 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2" containerName="util" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 
07:15:10.431575 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2" containerName="util" Nov 29 07:15:10 crc kubenswrapper[4828]: E1129 07:15:10.431587 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2" containerName="extract" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.431595 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2" containerName="extract" Nov 29 07:15:10 crc kubenswrapper[4828]: E1129 07:15:10.431609 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2" containerName="pull" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.431617 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2" containerName="pull" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.431749 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a148e4-21ef-4210-9d9a-592a9f5a663c" containerName="collect-profiles" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.431771 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2" containerName="extract" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.432281 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ghxdc" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.435325 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-vvzcg" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.435633 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.435647 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.445431 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-ghxdc"] Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.583138 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcrcb\" (UniqueName: \"kubernetes.io/projected/2b65ae71-1e9f-439c-9c5c-8980083ea513-kube-api-access-pcrcb\") pod \"nmstate-operator-5b5b58f5c8-ghxdc\" (UID: \"2b65ae71-1e9f-439c-9c5c-8980083ea513\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ghxdc" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.684375 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcrcb\" (UniqueName: \"kubernetes.io/projected/2b65ae71-1e9f-439c-9c5c-8980083ea513-kube-api-access-pcrcb\") pod \"nmstate-operator-5b5b58f5c8-ghxdc\" (UID: \"2b65ae71-1e9f-439c-9c5c-8980083ea513\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ghxdc" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.712691 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcrcb\" (UniqueName: \"kubernetes.io/projected/2b65ae71-1e9f-439c-9c5c-8980083ea513-kube-api-access-pcrcb\") pod \"nmstate-operator-5b5b58f5c8-ghxdc\" (UID: 
\"2b65ae71-1e9f-439c-9c5c-8980083ea513\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ghxdc" Nov 29 07:15:10 crc kubenswrapper[4828]: I1129 07:15:10.749569 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ghxdc" Nov 29 07:15:11 crc kubenswrapper[4828]: I1129 07:15:11.181196 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-ghxdc"] Nov 29 07:15:11 crc kubenswrapper[4828]: I1129 07:15:11.327181 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ghxdc" event={"ID":"2b65ae71-1e9f-439c-9c5c-8980083ea513","Type":"ContainerStarted","Data":"a879e6d29cf02bea6fd27913764f5217fd8566683c13aff5b83a5af5e63f131e"} Nov 29 07:15:11 crc kubenswrapper[4828]: I1129 07:15:11.487315 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:15:11 crc kubenswrapper[4828]: I1129 07:15:11.487386 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:15:11 crc kubenswrapper[4828]: I1129 07:15:11.487435 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:15:11 crc kubenswrapper[4828]: I1129 07:15:11.488051 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f5b914bfefdcc07cd9bb4f5df5d162e71875a1700dbc77fcde461a09b944198b"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:15:11 crc kubenswrapper[4828]: I1129 07:15:11.488126 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://f5b914bfefdcc07cd9bb4f5df5d162e71875a1700dbc77fcde461a09b944198b" gracePeriod=600 Nov 29 07:15:11 crc kubenswrapper[4828]: E1129 07:15:11.602473 4828 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce72f1df_15a3_475b_918b_9076a0d9c29c.slice/crio-f5b914bfefdcc07cd9bb4f5df5d162e71875a1700dbc77fcde461a09b944198b.scope\": RecentStats: unable to find data in memory cache]" Nov 29 07:15:12 crc kubenswrapper[4828]: I1129 07:15:12.255879 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:12 crc kubenswrapper[4828]: I1129 07:15:12.256309 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:12 crc kubenswrapper[4828]: I1129 07:15:12.335615 4828 generic.go:334] "Generic (PLEG): container finished" podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="f5b914bfefdcc07cd9bb4f5df5d162e71875a1700dbc77fcde461a09b944198b" exitCode=0 Nov 29 07:15:12 crc kubenswrapper[4828]: I1129 07:15:12.335683 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" 
event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"f5b914bfefdcc07cd9bb4f5df5d162e71875a1700dbc77fcde461a09b944198b"} Nov 29 07:15:12 crc kubenswrapper[4828]: I1129 07:15:12.336057 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"e5d888f8d3600bd400d965197bc611e5fd51d1d573dbd26ed26d72bf3be20d36"} Nov 29 07:15:12 crc kubenswrapper[4828]: I1129 07:15:12.336079 4828 scope.go:117] "RemoveContainer" containerID="81b06e8db4c29a460c072dc8a796a4c319640158b71110f5d37e4548c1dd9feb" Nov 29 07:15:13 crc kubenswrapper[4828]: I1129 07:15:13.303546 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lzqbk" podUID="e2bd5073-3724-4d31-a7bf-dc2f5faa090d" containerName="registry-server" probeResult="failure" output=< Nov 29 07:15:13 crc kubenswrapper[4828]: timeout: failed to connect service ":50051" within 1s Nov 29 07:15:13 crc kubenswrapper[4828]: > Nov 29 07:15:17 crc kubenswrapper[4828]: I1129 07:15:17.365704 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ghxdc" event={"ID":"2b65ae71-1e9f-439c-9c5c-8980083ea513","Type":"ContainerStarted","Data":"2b893e690face6d67e45986a13d56e1e0f2278aec3b3cccde0c37299f96ba174"} Nov 29 07:15:17 crc kubenswrapper[4828]: I1129 07:15:17.387610 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ghxdc" podStartSLOduration=1.4297364670000001 podStartE2EDuration="7.387588261s" podCreationTimestamp="2025-11-29 07:15:10 +0000 UTC" firstStartedPulling="2025-11-29 07:15:11.189459233 +0000 UTC m=+850.811535291" lastFinishedPulling="2025-11-29 07:15:17.147311027 +0000 UTC m=+856.769387085" observedRunningTime="2025-11-29 07:15:17.385015486 +0000 UTC m=+857.007091544" 
watchObservedRunningTime="2025-11-29 07:15:17.387588261 +0000 UTC m=+857.009664329" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.080999 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-tzzc4"] Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.082459 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tzzc4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.089161 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-82j4n" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.094807 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-tzzc4"] Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.103322 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6"] Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.104400 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.107323 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.140923 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-zcc72"] Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.142411 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.148690 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6"] Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.216568 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b6b458f9-e87c-4841-bb7e-a62e1a283434-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-kr5n6\" (UID: \"b6b458f9-e87c-4841-bb7e-a62e1a283434\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.216621 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f49t6\" (UniqueName: \"kubernetes.io/projected/b6b458f9-e87c-4841-bb7e-a62e1a283434-kube-api-access-f49t6\") pod \"nmstate-webhook-5f6d4c5ccb-kr5n6\" (UID: \"b6b458f9-e87c-4841-bb7e-a62e1a283434\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.216647 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz8j6\" (UniqueName: \"kubernetes.io/projected/e0556ed8-0627-45a6-9c96-3deae542a208-kube-api-access-xz8j6\") pod \"nmstate-metrics-7f946cbc9-tzzc4\" (UID: \"e0556ed8-0627-45a6-9c96-3deae542a208\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tzzc4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.224941 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh"] Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.225915 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.230817 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.230913 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.231020 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-5kdxv" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.268767 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh"] Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.317716 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f49t6\" (UniqueName: \"kubernetes.io/projected/b6b458f9-e87c-4841-bb7e-a62e1a283434-kube-api-access-f49t6\") pod \"nmstate-webhook-5f6d4c5ccb-kr5n6\" (UID: \"b6b458f9-e87c-4841-bb7e-a62e1a283434\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.319038 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz8j6\" (UniqueName: \"kubernetes.io/projected/e0556ed8-0627-45a6-9c96-3deae542a208-kube-api-access-xz8j6\") pod \"nmstate-metrics-7f946cbc9-tzzc4\" (UID: \"e0556ed8-0627-45a6-9c96-3deae542a208\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tzzc4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.319555 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlgv7\" (UniqueName: \"kubernetes.io/projected/9cff6462-3fcb-4ea2-8d92-6ff9c616313b-kube-api-access-xlgv7\") pod \"nmstate-handler-zcc72\" (UID: \"9cff6462-3fcb-4ea2-8d92-6ff9c616313b\") " 
pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.319664 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2d27739-6c6a-49c9-8032-4b206f20007e-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-srhqh\" (UID: \"e2d27739-6c6a-49c9-8032-4b206f20007e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.319772 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e2d27739-6c6a-49c9-8032-4b206f20007e-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-srhqh\" (UID: \"e2d27739-6c6a-49c9-8032-4b206f20007e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.319876 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9cff6462-3fcb-4ea2-8d92-6ff9c616313b-dbus-socket\") pod \"nmstate-handler-zcc72\" (UID: \"9cff6462-3fcb-4ea2-8d92-6ff9c616313b\") " pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.319926 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9cff6462-3fcb-4ea2-8d92-6ff9c616313b-nmstate-lock\") pod \"nmstate-handler-zcc72\" (UID: \"9cff6462-3fcb-4ea2-8d92-6ff9c616313b\") " pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.319958 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b6b458f9-e87c-4841-bb7e-a62e1a283434-tls-key-pair\") pod 
\"nmstate-webhook-5f6d4c5ccb-kr5n6\" (UID: \"b6b458f9-e87c-4841-bb7e-a62e1a283434\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.320022 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9cff6462-3fcb-4ea2-8d92-6ff9c616313b-ovs-socket\") pod \"nmstate-handler-zcc72\" (UID: \"9cff6462-3fcb-4ea2-8d92-6ff9c616313b\") " pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.320066 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfvdh\" (UniqueName: \"kubernetes.io/projected/e2d27739-6c6a-49c9-8032-4b206f20007e-kube-api-access-mfvdh\") pod \"nmstate-console-plugin-7fbb5f6569-srhqh\" (UID: \"e2d27739-6c6a-49c9-8032-4b206f20007e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" Nov 29 07:15:20 crc kubenswrapper[4828]: E1129 07:15:20.320075 4828 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 29 07:15:20 crc kubenswrapper[4828]: E1129 07:15:20.320164 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6b458f9-e87c-4841-bb7e-a62e1a283434-tls-key-pair podName:b6b458f9-e87c-4841-bb7e-a62e1a283434 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:20.820118869 +0000 UTC m=+860.442194957 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/b6b458f9-e87c-4841-bb7e-a62e1a283434-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-kr5n6" (UID: "b6b458f9-e87c-4841-bb7e-a62e1a283434") : secret "openshift-nmstate-webhook" not found Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.340884 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f49t6\" (UniqueName: \"kubernetes.io/projected/b6b458f9-e87c-4841-bb7e-a62e1a283434-kube-api-access-f49t6\") pod \"nmstate-webhook-5f6d4c5ccb-kr5n6\" (UID: \"b6b458f9-e87c-4841-bb7e-a62e1a283434\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.341539 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz8j6\" (UniqueName: \"kubernetes.io/projected/e0556ed8-0627-45a6-9c96-3deae542a208-kube-api-access-xz8j6\") pod \"nmstate-metrics-7f946cbc9-tzzc4\" (UID: \"e0556ed8-0627-45a6-9c96-3deae542a208\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tzzc4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.407452 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tzzc4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.422113 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e2d27739-6c6a-49c9-8032-4b206f20007e-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-srhqh\" (UID: \"e2d27739-6c6a-49c9-8032-4b206f20007e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.422355 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9cff6462-3fcb-4ea2-8d92-6ff9c616313b-dbus-socket\") pod \"nmstate-handler-zcc72\" (UID: \"9cff6462-3fcb-4ea2-8d92-6ff9c616313b\") " pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.422398 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9cff6462-3fcb-4ea2-8d92-6ff9c616313b-nmstate-lock\") pod \"nmstate-handler-zcc72\" (UID: \"9cff6462-3fcb-4ea2-8d92-6ff9c616313b\") " pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.422456 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9cff6462-3fcb-4ea2-8d92-6ff9c616313b-ovs-socket\") pod \"nmstate-handler-zcc72\" (UID: \"9cff6462-3fcb-4ea2-8d92-6ff9c616313b\") " pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.422494 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfvdh\" (UniqueName: \"kubernetes.io/projected/e2d27739-6c6a-49c9-8032-4b206f20007e-kube-api-access-mfvdh\") pod \"nmstate-console-plugin-7fbb5f6569-srhqh\" (UID: \"e2d27739-6c6a-49c9-8032-4b206f20007e\") " 
pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.422545 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlgv7\" (UniqueName: \"kubernetes.io/projected/9cff6462-3fcb-4ea2-8d92-6ff9c616313b-kube-api-access-xlgv7\") pod \"nmstate-handler-zcc72\" (UID: \"9cff6462-3fcb-4ea2-8d92-6ff9c616313b\") " pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.422599 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9cff6462-3fcb-4ea2-8d92-6ff9c616313b-nmstate-lock\") pod \"nmstate-handler-zcc72\" (UID: \"9cff6462-3fcb-4ea2-8d92-6ff9c616313b\") " pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.422646 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2d27739-6c6a-49c9-8032-4b206f20007e-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-srhqh\" (UID: \"e2d27739-6c6a-49c9-8032-4b206f20007e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.422602 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9cff6462-3fcb-4ea2-8d92-6ff9c616313b-ovs-socket\") pod \"nmstate-handler-zcc72\" (UID: \"9cff6462-3fcb-4ea2-8d92-6ff9c616313b\") " pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:20 crc kubenswrapper[4828]: E1129 07:15:20.422879 4828 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 29 07:15:20 crc kubenswrapper[4828]: E1129 07:15:20.422991 4828 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/e2d27739-6c6a-49c9-8032-4b206f20007e-plugin-serving-cert podName:e2d27739-6c6a-49c9-8032-4b206f20007e nodeName:}" failed. No retries permitted until 2025-11-29 07:15:20.922960752 +0000 UTC m=+860.545036990 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/e2d27739-6c6a-49c9-8032-4b206f20007e-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-srhqh" (UID: "e2d27739-6c6a-49c9-8032-4b206f20007e") : secret "plugin-serving-cert" not found Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.423291 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9cff6462-3fcb-4ea2-8d92-6ff9c616313b-dbus-socket\") pod \"nmstate-handler-zcc72\" (UID: \"9cff6462-3fcb-4ea2-8d92-6ff9c616313b\") " pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.423482 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e2d27739-6c6a-49c9-8032-4b206f20007e-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-srhqh\" (UID: \"e2d27739-6c6a-49c9-8032-4b206f20007e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.452975 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6db687dd84-db8h4"] Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.453431 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfvdh\" (UniqueName: \"kubernetes.io/projected/e2d27739-6c6a-49c9-8032-4b206f20007e-kube-api-access-mfvdh\") pod \"nmstate-console-plugin-7fbb5f6569-srhqh\" (UID: \"e2d27739-6c6a-49c9-8032-4b206f20007e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.454806 4828 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.463633 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlgv7\" (UniqueName: \"kubernetes.io/projected/9cff6462-3fcb-4ea2-8d92-6ff9c616313b-kube-api-access-xlgv7\") pod \"nmstate-handler-zcc72\" (UID: \"9cff6462-3fcb-4ea2-8d92-6ff9c616313b\") " pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.471178 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6db687dd84-db8h4"] Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.471502 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.625532 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj7vm\" (UniqueName: \"kubernetes.io/projected/497026a9-6d23-4c23-901a-dd8de908d533-kube-api-access-bj7vm\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.625948 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/497026a9-6d23-4c23-901a-dd8de908d533-service-ca\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.625982 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/497026a9-6d23-4c23-901a-dd8de908d533-console-config\") pod \"console-6db687dd84-db8h4\" (UID: 
\"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.626045 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/497026a9-6d23-4c23-901a-dd8de908d533-console-oauth-config\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.626120 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/497026a9-6d23-4c23-901a-dd8de908d533-trusted-ca-bundle\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.626151 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/497026a9-6d23-4c23-901a-dd8de908d533-oauth-serving-cert\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.626178 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/497026a9-6d23-4c23-901a-dd8de908d533-console-serving-cert\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.728138 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/497026a9-6d23-4c23-901a-dd8de908d533-service-ca\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.728224 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/497026a9-6d23-4c23-901a-dd8de908d533-console-config\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.728294 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/497026a9-6d23-4c23-901a-dd8de908d533-console-oauth-config\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.728405 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/497026a9-6d23-4c23-901a-dd8de908d533-trusted-ca-bundle\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.728672 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/497026a9-6d23-4c23-901a-dd8de908d533-oauth-serving-cert\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.728703 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/497026a9-6d23-4c23-901a-dd8de908d533-console-serving-cert\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.728745 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj7vm\" (UniqueName: \"kubernetes.io/projected/497026a9-6d23-4c23-901a-dd8de908d533-kube-api-access-bj7vm\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.729314 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/497026a9-6d23-4c23-901a-dd8de908d533-console-config\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.729382 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/497026a9-6d23-4c23-901a-dd8de908d533-service-ca\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.730209 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/497026a9-6d23-4c23-901a-dd8de908d533-oauth-serving-cert\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.730767 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/497026a9-6d23-4c23-901a-dd8de908d533-trusted-ca-bundle\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.734660 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/497026a9-6d23-4c23-901a-dd8de908d533-console-oauth-config\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.735575 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/497026a9-6d23-4c23-901a-dd8de908d533-console-serving-cert\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.745691 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj7vm\" (UniqueName: \"kubernetes.io/projected/497026a9-6d23-4c23-901a-dd8de908d533-kube-api-access-bj7vm\") pod \"console-6db687dd84-db8h4\" (UID: \"497026a9-6d23-4c23-901a-dd8de908d533\") " pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.829460 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b6b458f9-e87c-4841-bb7e-a62e1a283434-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-kr5n6\" (UID: \"b6b458f9-e87c-4841-bb7e-a62e1a283434\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.832674 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/b6b458f9-e87c-4841-bb7e-a62e1a283434-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-kr5n6\" (UID: \"b6b458f9-e87c-4841-bb7e-a62e1a283434\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.832911 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.837357 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-tzzc4"] Nov 29 07:15:20 crc kubenswrapper[4828]: W1129 07:15:20.850083 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0556ed8_0627_45a6_9c96_3deae542a208.slice/crio-1f0ce9f82c83a7244635ad7fb3662fbc8f0606f5ce7ff52ed0972456f68b9431 WatchSource:0}: Error finding container 1f0ce9f82c83a7244635ad7fb3662fbc8f0606f5ce7ff52ed0972456f68b9431: Status 404 returned error can't find the container with id 1f0ce9f82c83a7244635ad7fb3662fbc8f0606f5ce7ff52ed0972456f68b9431 Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.936262 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2d27739-6c6a-49c9-8032-4b206f20007e-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-srhqh\" (UID: \"e2d27739-6c6a-49c9-8032-4b206f20007e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" Nov 29 07:15:20 crc kubenswrapper[4828]: I1129 07:15:20.940692 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2d27739-6c6a-49c9-8032-4b206f20007e-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-srhqh\" (UID: \"e2d27739-6c6a-49c9-8032-4b206f20007e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" Nov 29 07:15:21 crc 
kubenswrapper[4828]: I1129 07:15:21.035160 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6" Nov 29 07:15:21 crc kubenswrapper[4828]: I1129 07:15:21.146921 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" Nov 29 07:15:21 crc kubenswrapper[4828]: I1129 07:15:21.225893 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6db687dd84-db8h4"] Nov 29 07:15:21 crc kubenswrapper[4828]: W1129 07:15:21.257715 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod497026a9_6d23_4c23_901a_dd8de908d533.slice/crio-9c183f5dba7cc8c2309c2519602fb87dff73b48d19e141480f6862b858deebac WatchSource:0}: Error finding container 9c183f5dba7cc8c2309c2519602fb87dff73b48d19e141480f6862b858deebac: Status 404 returned error can't find the container with id 9c183f5dba7cc8c2309c2519602fb87dff73b48d19e141480f6862b858deebac Nov 29 07:15:21 crc kubenswrapper[4828]: I1129 07:15:21.354110 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6"] Nov 29 07:15:21 crc kubenswrapper[4828]: I1129 07:15:21.391337 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6" event={"ID":"b6b458f9-e87c-4841-bb7e-a62e1a283434","Type":"ContainerStarted","Data":"4629bd1cc5d56b4fef4fed75977f05a397fc730bdb09b9a024bd3aa5c0593b28"} Nov 29 07:15:21 crc kubenswrapper[4828]: I1129 07:15:21.392490 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6db687dd84-db8h4" event={"ID":"497026a9-6d23-4c23-901a-dd8de908d533","Type":"ContainerStarted","Data":"9c183f5dba7cc8c2309c2519602fb87dff73b48d19e141480f6862b858deebac"} Nov 29 07:15:21 crc kubenswrapper[4828]: I1129 07:15:21.393337 4828 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-zcc72" event={"ID":"9cff6462-3fcb-4ea2-8d92-6ff9c616313b","Type":"ContainerStarted","Data":"0dce8d6aa26b1bf68a02eed02b258189762695b42a13bdff212a432c35033d02"} Nov 29 07:15:21 crc kubenswrapper[4828]: I1129 07:15:21.396940 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tzzc4" event={"ID":"e0556ed8-0627-45a6-9c96-3deae542a208","Type":"ContainerStarted","Data":"1f0ce9f82c83a7244635ad7fb3662fbc8f0606f5ce7ff52ed0972456f68b9431"} Nov 29 07:15:21 crc kubenswrapper[4828]: I1129 07:15:21.424508 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh"] Nov 29 07:15:21 crc kubenswrapper[4828]: W1129 07:15:21.428249 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2d27739_6c6a_49c9_8032_4b206f20007e.slice/crio-b9bdf6ed39dd8d52a0a2a097c19dbaf3f49970686596c465b18c8bcd8504b84f WatchSource:0}: Error finding container b9bdf6ed39dd8d52a0a2a097c19dbaf3f49970686596c465b18c8bcd8504b84f: Status 404 returned error can't find the container with id b9bdf6ed39dd8d52a0a2a097c19dbaf3f49970686596c465b18c8bcd8504b84f Nov 29 07:15:22 crc kubenswrapper[4828]: I1129 07:15:22.300383 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:22 crc kubenswrapper[4828]: I1129 07:15:22.342998 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:22 crc kubenswrapper[4828]: I1129 07:15:22.404299 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" event={"ID":"e2d27739-6c6a-49c9-8032-4b206f20007e","Type":"ContainerStarted","Data":"b9bdf6ed39dd8d52a0a2a097c19dbaf3f49970686596c465b18c8bcd8504b84f"} Nov 29 
07:15:22 crc kubenswrapper[4828]: I1129 07:15:22.405777 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6db687dd84-db8h4" event={"ID":"497026a9-6d23-4c23-901a-dd8de908d533","Type":"ContainerStarted","Data":"7025592bc539aa9e1c828412fe02005dee6e7c7d5ca396dde0ffa687e243fff5"} Nov 29 07:15:22 crc kubenswrapper[4828]: I1129 07:15:22.532609 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lzqbk"] Nov 29 07:15:23 crc kubenswrapper[4828]: I1129 07:15:23.414959 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lzqbk" podUID="e2bd5073-3724-4d31-a7bf-dc2f5faa090d" containerName="registry-server" containerID="cri-o://7d4f013069651355b8fb2ccb8f5a76e66ef7073bac5b130d45670dc0a366fa08" gracePeriod=2 Nov 29 07:15:24 crc kubenswrapper[4828]: I1129 07:15:24.482885 4828 generic.go:334] "Generic (PLEG): container finished" podID="e2bd5073-3724-4d31-a7bf-dc2f5faa090d" containerID="7d4f013069651355b8fb2ccb8f5a76e66ef7073bac5b130d45670dc0a366fa08" exitCode=0 Nov 29 07:15:24 crc kubenswrapper[4828]: I1129 07:15:24.482974 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzqbk" event={"ID":"e2bd5073-3724-4d31-a7bf-dc2f5faa090d","Type":"ContainerDied","Data":"7d4f013069651355b8fb2ccb8f5a76e66ef7073bac5b130d45670dc0a366fa08"} Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.352151 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.370686 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6db687dd84-db8h4" podStartSLOduration=5.370644491 podStartE2EDuration="5.370644491s" podCreationTimestamp="2025-11-29 07:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:15:23.441023125 +0000 UTC m=+863.063099193" watchObservedRunningTime="2025-11-29 07:15:25.370644491 +0000 UTC m=+864.992720549" Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.492149 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzqbk" event={"ID":"e2bd5073-3724-4d31-a7bf-dc2f5faa090d","Type":"ContainerDied","Data":"7fe6316e524dbf363b4c8a51e1b78ca98c1708205c2f33bd35c7cba8909c4131"} Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.492206 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lzqbk" Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.492248 4828 scope.go:117] "RemoveContainer" containerID="7d4f013069651355b8fb2ccb8f5a76e66ef7073bac5b130d45670dc0a366fa08" Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.517203 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-utilities\") pod \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\" (UID: \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\") " Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.517408 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-catalog-content\") pod \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\" (UID: \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\") " Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.517498 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlckr\" (UniqueName: \"kubernetes.io/projected/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-kube-api-access-qlckr\") pod \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\" (UID: \"e2bd5073-3724-4d31-a7bf-dc2f5faa090d\") " Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.518419 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-utilities" (OuterVolumeSpecName: "utilities") pod "e2bd5073-3724-4d31-a7bf-dc2f5faa090d" (UID: "e2bd5073-3724-4d31-a7bf-dc2f5faa090d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.524706 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-kube-api-access-qlckr" (OuterVolumeSpecName: "kube-api-access-qlckr") pod "e2bd5073-3724-4d31-a7bf-dc2f5faa090d" (UID: "e2bd5073-3724-4d31-a7bf-dc2f5faa090d"). InnerVolumeSpecName "kube-api-access-qlckr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.525233 4828 scope.go:117] "RemoveContainer" containerID="7096c89ace5beeeb28fcb432193d5abcaf0847a89ea32740da59035fa7847ebe" Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.550753 4828 scope.go:117] "RemoveContainer" containerID="1729172a5c08205b1170926ad95bffe3ae66112045997fa932d5db4df1e2ddeb" Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.618936 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlckr\" (UniqueName: \"kubernetes.io/projected/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-kube-api-access-qlckr\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.618967 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.628183 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2bd5073-3724-4d31-a7bf-dc2f5faa090d" (UID: "e2bd5073-3724-4d31-a7bf-dc2f5faa090d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.720518 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2bd5073-3724-4d31-a7bf-dc2f5faa090d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.818627 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lzqbk"] Nov 29 07:15:25 crc kubenswrapper[4828]: I1129 07:15:25.824093 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lzqbk"] Nov 29 07:15:27 crc kubenswrapper[4828]: I1129 07:15:27.420212 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2bd5073-3724-4d31-a7bf-dc2f5faa090d" path="/var/lib/kubelet/pods/e2bd5073-3724-4d31-a7bf-dc2f5faa090d/volumes" Nov 29 07:15:28 crc kubenswrapper[4828]: I1129 07:15:28.517442 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-zcc72" event={"ID":"9cff6462-3fcb-4ea2-8d92-6ff9c616313b","Type":"ContainerStarted","Data":"60df282afefaa493039961e67c174ede22d37514ac9c9d5eb8fcb00b9754e5da"} Nov 29 07:15:28 crc kubenswrapper[4828]: I1129 07:15:28.518033 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:28 crc kubenswrapper[4828]: I1129 07:15:28.519531 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tzzc4" event={"ID":"e0556ed8-0627-45a6-9c96-3deae542a208","Type":"ContainerStarted","Data":"8aeec141a97b1e21d0395dfe7b239d709caeef8dc5aee89505890d2b2ffd13d7"} Nov 29 07:15:28 crc kubenswrapper[4828]: I1129 07:15:28.521852 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6" 
event={"ID":"b6b458f9-e87c-4841-bb7e-a62e1a283434","Type":"ContainerStarted","Data":"f9f424f80484cf600ea7d2baf2a09e8cb1b91486b8a8fbfb70c34554a05911c7"} Nov 29 07:15:28 crc kubenswrapper[4828]: I1129 07:15:28.522023 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6" Nov 29 07:15:28 crc kubenswrapper[4828]: I1129 07:15:28.523889 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" event={"ID":"e2d27739-6c6a-49c9-8032-4b206f20007e","Type":"ContainerStarted","Data":"679047465e93c1a856b1d52d9dd0497569d9dc0e3e719e9d07f48feeed37f276"} Nov 29 07:15:28 crc kubenswrapper[4828]: I1129 07:15:28.589168 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-zcc72" podStartSLOduration=1.483432213 podStartE2EDuration="8.589138109s" podCreationTimestamp="2025-11-29 07:15:20 +0000 UTC" firstStartedPulling="2025-11-29 07:15:20.503003019 +0000 UTC m=+860.125079077" lastFinishedPulling="2025-11-29 07:15:27.608708915 +0000 UTC m=+867.230784973" observedRunningTime="2025-11-29 07:15:28.580990242 +0000 UTC m=+868.203066320" watchObservedRunningTime="2025-11-29 07:15:28.589138109 +0000 UTC m=+868.211214177" Nov 29 07:15:28 crc kubenswrapper[4828]: I1129 07:15:28.605195 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6" podStartSLOduration=2.220926145 podStartE2EDuration="8.605174695s" podCreationTimestamp="2025-11-29 07:15:20 +0000 UTC" firstStartedPulling="2025-11-29 07:15:21.369904628 +0000 UTC m=+860.991980686" lastFinishedPulling="2025-11-29 07:15:27.754153168 +0000 UTC m=+867.376229236" observedRunningTime="2025-11-29 07:15:28.604435816 +0000 UTC m=+868.226511894" watchObservedRunningTime="2025-11-29 07:15:28.605174695 +0000 UTC m=+868.227250753" Nov 29 07:15:28 crc kubenswrapper[4828]: I1129 07:15:28.652815 4828 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-srhqh" podStartSLOduration=2.328617522 podStartE2EDuration="8.65279081s" podCreationTimestamp="2025-11-29 07:15:20 +0000 UTC" firstStartedPulling="2025-11-29 07:15:21.431660642 +0000 UTC m=+861.053736700" lastFinishedPulling="2025-11-29 07:15:27.75583391 +0000 UTC m=+867.377909988" observedRunningTime="2025-11-29 07:15:28.62039395 +0000 UTC m=+868.242470008" watchObservedRunningTime="2025-11-29 07:15:28.65279081 +0000 UTC m=+868.274866868" Nov 29 07:15:30 crc kubenswrapper[4828]: I1129 07:15:30.834368 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:30 crc kubenswrapper[4828]: I1129 07:15:30.835134 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:30 crc kubenswrapper[4828]: I1129 07:15:30.840103 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:31 crc kubenswrapper[4828]: I1129 07:15:31.545022 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tzzc4" event={"ID":"e0556ed8-0627-45a6-9c96-3deae542a208","Type":"ContainerStarted","Data":"1367836db7a07001b8f75df22eb33d475f5df31e28db85d9dd9b8843e00d1bad"} Nov 29 07:15:31 crc kubenswrapper[4828]: I1129 07:15:31.549233 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6db687dd84-db8h4" Nov 29 07:15:31 crc kubenswrapper[4828]: I1129 07:15:31.571642 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tzzc4" podStartSLOduration=1.8496074230000001 podStartE2EDuration="11.57159945s" podCreationTimestamp="2025-11-29 07:15:20 +0000 UTC" firstStartedPulling="2025-11-29 
07:15:20.861208368 +0000 UTC m=+860.483284426" lastFinishedPulling="2025-11-29 07:15:30.583200395 +0000 UTC m=+870.205276453" observedRunningTime="2025-11-29 07:15:31.566779658 +0000 UTC m=+871.188855726" watchObservedRunningTime="2025-11-29 07:15:31.57159945 +0000 UTC m=+871.193675508" Nov 29 07:15:31 crc kubenswrapper[4828]: I1129 07:15:31.626846 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9vbf7"] Nov 29 07:15:35 crc kubenswrapper[4828]: I1129 07:15:35.495030 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-zcc72" Nov 29 07:15:41 crc kubenswrapper[4828]: I1129 07:15:41.041245 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-kr5n6" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.224873 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9"] Nov 29 07:15:54 crc kubenswrapper[4828]: E1129 07:15:54.225803 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bd5073-3724-4d31-a7bf-dc2f5faa090d" containerName="registry-server" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.225835 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bd5073-3724-4d31-a7bf-dc2f5faa090d" containerName="registry-server" Nov 29 07:15:54 crc kubenswrapper[4828]: E1129 07:15:54.225857 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bd5073-3724-4d31-a7bf-dc2f5faa090d" containerName="extract-utilities" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.225866 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bd5073-3724-4d31-a7bf-dc2f5faa090d" containerName="extract-utilities" Nov 29 07:15:54 crc kubenswrapper[4828]: E1129 07:15:54.225885 4828 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e2bd5073-3724-4d31-a7bf-dc2f5faa090d" containerName="extract-content" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.225894 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bd5073-3724-4d31-a7bf-dc2f5faa090d" containerName="extract-content" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.226048 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bd5073-3724-4d31-a7bf-dc2f5faa090d" containerName="registry-server" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.226917 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.229691 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.236729 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9"] Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.345351 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7790c72-dd1d-405c-8360-a63989834be8-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9\" (UID: \"f7790c72-dd1d-405c-8360-a63989834be8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.345406 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssn7j\" (UniqueName: \"kubernetes.io/projected/f7790c72-dd1d-405c-8360-a63989834be8-kube-api-access-ssn7j\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9\" (UID: \"f7790c72-dd1d-405c-8360-a63989834be8\") " 
pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.345424 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7790c72-dd1d-405c-8360-a63989834be8-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9\" (UID: \"f7790c72-dd1d-405c-8360-a63989834be8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.446638 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7790c72-dd1d-405c-8360-a63989834be8-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9\" (UID: \"f7790c72-dd1d-405c-8360-a63989834be8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.446704 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssn7j\" (UniqueName: \"kubernetes.io/projected/f7790c72-dd1d-405c-8360-a63989834be8-kube-api-access-ssn7j\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9\" (UID: \"f7790c72-dd1d-405c-8360-a63989834be8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.446742 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7790c72-dd1d-405c-8360-a63989834be8-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9\" (UID: \"f7790c72-dd1d-405c-8360-a63989834be8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 
07:15:54.447406 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7790c72-dd1d-405c-8360-a63989834be8-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9\" (UID: \"f7790c72-dd1d-405c-8360-a63989834be8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.447678 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7790c72-dd1d-405c-8360-a63989834be8-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9\" (UID: \"f7790c72-dd1d-405c-8360-a63989834be8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.469001 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssn7j\" (UniqueName: \"kubernetes.io/projected/f7790c72-dd1d-405c-8360-a63989834be8-kube-api-access-ssn7j\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9\" (UID: \"f7790c72-dd1d-405c-8360-a63989834be8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.549929 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" Nov 29 07:15:54 crc kubenswrapper[4828]: I1129 07:15:54.755380 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9"] Nov 29 07:15:55 crc kubenswrapper[4828]: I1129 07:15:55.707200 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" event={"ID":"f7790c72-dd1d-405c-8360-a63989834be8","Type":"ContainerStarted","Data":"d8d18ae7918c02f7976c776ea262c3b4fdb3e6e782bbb12cd4296028dcb71d91"} Nov 29 07:15:55 crc kubenswrapper[4828]: I1129 07:15:55.707595 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" event={"ID":"f7790c72-dd1d-405c-8360-a63989834be8","Type":"ContainerStarted","Data":"21f3813daff0e4e0469bec479cf8da729386cc022f2d2357d2c28569d81dc498"} Nov 29 07:15:56 crc kubenswrapper[4828]: I1129 07:15:56.665148 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9vbf7" podUID="78cb844a-3bae-4cd2-9fb8-63f20fec1755" containerName="console" containerID="cri-o://0bb265ca626867799823d2fe9e6184ba39d765ab2a6109e5c6dae81813541993" gracePeriod=15 Nov 29 07:15:56 crc kubenswrapper[4828]: I1129 07:15:56.715664 4828 generic.go:334] "Generic (PLEG): container finished" podID="f7790c72-dd1d-405c-8360-a63989834be8" containerID="d8d18ae7918c02f7976c776ea262c3b4fdb3e6e782bbb12cd4296028dcb71d91" exitCode=0 Nov 29 07:15:56 crc kubenswrapper[4828]: I1129 07:15:56.715719 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" 
event={"ID":"f7790c72-dd1d-405c-8360-a63989834be8","Type":"ContainerDied","Data":"d8d18ae7918c02f7976c776ea262c3b4fdb3e6e782bbb12cd4296028dcb71d91"} Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.599478 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9vbf7_78cb844a-3bae-4cd2-9fb8-63f20fec1755/console/0.log" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.599814 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.692785 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-config\") pod \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.692863 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-service-ca\") pod \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.692900 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-oauth-serving-cert\") pod \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.692945 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pphlr\" (UniqueName: \"kubernetes.io/projected/78cb844a-3bae-4cd2-9fb8-63f20fec1755-kube-api-access-pphlr\") pod \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " Nov 29 
07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.692978 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-oauth-config\") pod \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.692998 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-trusted-ca-bundle\") pod \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.693028 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-serving-cert\") pod \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\" (UID: \"78cb844a-3bae-4cd2-9fb8-63f20fec1755\") " Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.695197 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "78cb844a-3bae-4cd2-9fb8-63f20fec1755" (UID: "78cb844a-3bae-4cd2-9fb8-63f20fec1755"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.695292 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-service-ca" (OuterVolumeSpecName: "service-ca") pod "78cb844a-3bae-4cd2-9fb8-63f20fec1755" (UID: "78cb844a-3bae-4cd2-9fb8-63f20fec1755"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.695803 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "78cb844a-3bae-4cd2-9fb8-63f20fec1755" (UID: "78cb844a-3bae-4cd2-9fb8-63f20fec1755"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.695858 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-config" (OuterVolumeSpecName: "console-config") pod "78cb844a-3bae-4cd2-9fb8-63f20fec1755" (UID: "78cb844a-3bae-4cd2-9fb8-63f20fec1755"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.701350 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "78cb844a-3bae-4cd2-9fb8-63f20fec1755" (UID: "78cb844a-3bae-4cd2-9fb8-63f20fec1755"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.701421 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78cb844a-3bae-4cd2-9fb8-63f20fec1755-kube-api-access-pphlr" (OuterVolumeSpecName: "kube-api-access-pphlr") pod "78cb844a-3bae-4cd2-9fb8-63f20fec1755" (UID: "78cb844a-3bae-4cd2-9fb8-63f20fec1755"). InnerVolumeSpecName "kube-api-access-pphlr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.705726 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "78cb844a-3bae-4cd2-9fb8-63f20fec1755" (UID: "78cb844a-3bae-4cd2-9fb8-63f20fec1755"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.724976 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9vbf7_78cb844a-3bae-4cd2-9fb8-63f20fec1755/console/0.log" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.725037 4828 generic.go:334] "Generic (PLEG): container finished" podID="78cb844a-3bae-4cd2-9fb8-63f20fec1755" containerID="0bb265ca626867799823d2fe9e6184ba39d765ab2a6109e5c6dae81813541993" exitCode=2 Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.725072 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9vbf7" event={"ID":"78cb844a-3bae-4cd2-9fb8-63f20fec1755","Type":"ContainerDied","Data":"0bb265ca626867799823d2fe9e6184ba39d765ab2a6109e5c6dae81813541993"} Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.725103 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9vbf7" event={"ID":"78cb844a-3bae-4cd2-9fb8-63f20fec1755","Type":"ContainerDied","Data":"f3bc91b6d2235fe32c1d2a278557c8b143268241357f0526b8de33038381972a"} Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.725122 4828 scope.go:117] "RemoveContainer" containerID="0bb265ca626867799823d2fe9e6184ba39d765ab2a6109e5c6dae81813541993" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.725123 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9vbf7" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.759770 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9vbf7"] Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.760445 4828 scope.go:117] "RemoveContainer" containerID="0bb265ca626867799823d2fe9e6184ba39d765ab2a6109e5c6dae81813541993" Nov 29 07:15:57 crc kubenswrapper[4828]: E1129 07:15:57.762029 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bb265ca626867799823d2fe9e6184ba39d765ab2a6109e5c6dae81813541993\": container with ID starting with 0bb265ca626867799823d2fe9e6184ba39d765ab2a6109e5c6dae81813541993 not found: ID does not exist" containerID="0bb265ca626867799823d2fe9e6184ba39d765ab2a6109e5c6dae81813541993" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.762412 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bb265ca626867799823d2fe9e6184ba39d765ab2a6109e5c6dae81813541993"} err="failed to get container status \"0bb265ca626867799823d2fe9e6184ba39d765ab2a6109e5c6dae81813541993\": rpc error: code = NotFound desc = could not find container \"0bb265ca626867799823d2fe9e6184ba39d765ab2a6109e5c6dae81813541993\": container with ID starting with 0bb265ca626867799823d2fe9e6184ba39d765ab2a6109e5c6dae81813541993 not found: ID does not exist" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.764907 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9vbf7"] Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.794952 4828 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.794990 4828 
reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.795011 4828 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.795026 4828 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.795117 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pphlr\" (UniqueName: \"kubernetes.io/projected/78cb844a-3bae-4cd2-9fb8-63f20fec1755-kube-api-access-pphlr\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.795138 4828 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/78cb844a-3bae-4cd2-9fb8-63f20fec1755-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:57 crc kubenswrapper[4828]: I1129 07:15:57.795150 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78cb844a-3bae-4cd2-9fb8-63f20fec1755-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:58 crc kubenswrapper[4828]: I1129 07:15:58.455611 4828 patch_prober.go:28] interesting pod/console-f9d7485db-9vbf7 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:15:58 crc kubenswrapper[4828]: I1129 
07:15:58.455956 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-9vbf7" podUID="78cb844a-3bae-4cd2-9fb8-63f20fec1755" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 07:15:59 crc kubenswrapper[4828]: I1129 07:15:59.418230 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78cb844a-3bae-4cd2-9fb8-63f20fec1755" path="/var/lib/kubelet/pods/78cb844a-3bae-4cd2-9fb8-63f20fec1755/volumes" Nov 29 07:16:00 crc kubenswrapper[4828]: I1129 07:16:00.754086 4828 generic.go:334] "Generic (PLEG): container finished" podID="f7790c72-dd1d-405c-8360-a63989834be8" containerID="8fc5e8b8bda60b5b9818fc47c139ba00e95219e50e9a0d40effd942d730bd112" exitCode=0 Nov 29 07:16:00 crc kubenswrapper[4828]: I1129 07:16:00.754460 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" event={"ID":"f7790c72-dd1d-405c-8360-a63989834be8","Type":"ContainerDied","Data":"8fc5e8b8bda60b5b9818fc47c139ba00e95219e50e9a0d40effd942d730bd112"} Nov 29 07:16:02 crc kubenswrapper[4828]: E1129 07:16:02.332504 4828 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache]" Nov 29 07:16:02 crc kubenswrapper[4828]: I1129 07:16:02.769379 4828 generic.go:334] "Generic (PLEG): container finished" podID="f7790c72-dd1d-405c-8360-a63989834be8" containerID="96e241bd2a4dce1ff60a548c09357a660344b885989c80a6a01e39fc4d8fddaa" exitCode=0 Nov 29 07:16:02 crc kubenswrapper[4828]: I1129 07:16:02.769425 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" 
event={"ID":"f7790c72-dd1d-405c-8360-a63989834be8","Type":"ContainerDied","Data":"96e241bd2a4dce1ff60a548c09357a660344b885989c80a6a01e39fc4d8fddaa"} Nov 29 07:16:04 crc kubenswrapper[4828]: I1129 07:16:04.017915 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" Nov 29 07:16:04 crc kubenswrapper[4828]: I1129 07:16:04.110019 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7790c72-dd1d-405c-8360-a63989834be8-util\") pod \"f7790c72-dd1d-405c-8360-a63989834be8\" (UID: \"f7790c72-dd1d-405c-8360-a63989834be8\") " Nov 29 07:16:04 crc kubenswrapper[4828]: I1129 07:16:04.110094 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssn7j\" (UniqueName: \"kubernetes.io/projected/f7790c72-dd1d-405c-8360-a63989834be8-kube-api-access-ssn7j\") pod \"f7790c72-dd1d-405c-8360-a63989834be8\" (UID: \"f7790c72-dd1d-405c-8360-a63989834be8\") " Nov 29 07:16:04 crc kubenswrapper[4828]: I1129 07:16:04.110144 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7790c72-dd1d-405c-8360-a63989834be8-bundle\") pod \"f7790c72-dd1d-405c-8360-a63989834be8\" (UID: \"f7790c72-dd1d-405c-8360-a63989834be8\") " Nov 29 07:16:04 crc kubenswrapper[4828]: I1129 07:16:04.111529 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7790c72-dd1d-405c-8360-a63989834be8-bundle" (OuterVolumeSpecName: "bundle") pod "f7790c72-dd1d-405c-8360-a63989834be8" (UID: "f7790c72-dd1d-405c-8360-a63989834be8"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:16:04 crc kubenswrapper[4828]: I1129 07:16:04.119003 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7790c72-dd1d-405c-8360-a63989834be8-kube-api-access-ssn7j" (OuterVolumeSpecName: "kube-api-access-ssn7j") pod "f7790c72-dd1d-405c-8360-a63989834be8" (UID: "f7790c72-dd1d-405c-8360-a63989834be8"). InnerVolumeSpecName "kube-api-access-ssn7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:16:04 crc kubenswrapper[4828]: I1129 07:16:04.124377 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7790c72-dd1d-405c-8360-a63989834be8-util" (OuterVolumeSpecName: "util") pod "f7790c72-dd1d-405c-8360-a63989834be8" (UID: "f7790c72-dd1d-405c-8360-a63989834be8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:16:04 crc kubenswrapper[4828]: I1129 07:16:04.211499 4828 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7790c72-dd1d-405c-8360-a63989834be8-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:16:04 crc kubenswrapper[4828]: I1129 07:16:04.211572 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssn7j\" (UniqueName: \"kubernetes.io/projected/f7790c72-dd1d-405c-8360-a63989834be8-kube-api-access-ssn7j\") on node \"crc\" DevicePath \"\"" Nov 29 07:16:04 crc kubenswrapper[4828]: I1129 07:16:04.211587 4828 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7790c72-dd1d-405c-8360-a63989834be8-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:16:04 crc kubenswrapper[4828]: I1129 07:16:04.787064 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" 
event={"ID":"f7790c72-dd1d-405c-8360-a63989834be8","Type":"ContainerDied","Data":"21f3813daff0e4e0469bec479cf8da729386cc022f2d2357d2c28569d81dc498"} Nov 29 07:16:04 crc kubenswrapper[4828]: I1129 07:16:04.787619 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21f3813daff0e4e0469bec479cf8da729386cc022f2d2357d2c28569d81dc498" Nov 29 07:16:04 crc kubenswrapper[4828]: I1129 07:16:04.787098 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.419704 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw"] Nov 29 07:16:17 crc kubenswrapper[4828]: E1129 07:16:17.420462 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78cb844a-3bae-4cd2-9fb8-63f20fec1755" containerName="console" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.420476 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="78cb844a-3bae-4cd2-9fb8-63f20fec1755" containerName="console" Nov 29 07:16:17 crc kubenswrapper[4828]: E1129 07:16:17.420485 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7790c72-dd1d-405c-8360-a63989834be8" containerName="pull" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.420491 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7790c72-dd1d-405c-8360-a63989834be8" containerName="pull" Nov 29 07:16:17 crc kubenswrapper[4828]: E1129 07:16:17.420510 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7790c72-dd1d-405c-8360-a63989834be8" containerName="extract" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.420516 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7790c72-dd1d-405c-8360-a63989834be8" containerName="extract" Nov 29 07:16:17 crc kubenswrapper[4828]: E1129 07:16:17.420528 4828 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7790c72-dd1d-405c-8360-a63989834be8" containerName="util" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.420533 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7790c72-dd1d-405c-8360-a63989834be8" containerName="util" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.420624 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7790c72-dd1d-405c-8360-a63989834be8" containerName="extract" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.420636 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="78cb844a-3bae-4cd2-9fb8-63f20fec1755" containerName="console" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.421035 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.435195 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.435245 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.435517 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.435598 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-sm8jp" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.436118 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.455458 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw"] Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.487224 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zfcv\" (UniqueName: \"kubernetes.io/projected/7dc202de-98db-4521-9ab3-a67ce9dff293-kube-api-access-5zfcv\") pod \"metallb-operator-controller-manager-64dc5dd5cf-sbhrw\" (UID: \"7dc202de-98db-4521-9ab3-a67ce9dff293\") " pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.487361 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7dc202de-98db-4521-9ab3-a67ce9dff293-webhook-cert\") pod \"metallb-operator-controller-manager-64dc5dd5cf-sbhrw\" (UID: \"7dc202de-98db-4521-9ab3-a67ce9dff293\") " pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.487393 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7dc202de-98db-4521-9ab3-a67ce9dff293-apiservice-cert\") pod \"metallb-operator-controller-manager-64dc5dd5cf-sbhrw\" (UID: \"7dc202de-98db-4521-9ab3-a67ce9dff293\") " pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.589065 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zfcv\" (UniqueName: \"kubernetes.io/projected/7dc202de-98db-4521-9ab3-a67ce9dff293-kube-api-access-5zfcv\") pod \"metallb-operator-controller-manager-64dc5dd5cf-sbhrw\" (UID: \"7dc202de-98db-4521-9ab3-a67ce9dff293\") " pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.589142 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7dc202de-98db-4521-9ab3-a67ce9dff293-webhook-cert\") pod \"metallb-operator-controller-manager-64dc5dd5cf-sbhrw\" (UID: \"7dc202de-98db-4521-9ab3-a67ce9dff293\") " pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.589163 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7dc202de-98db-4521-9ab3-a67ce9dff293-apiservice-cert\") pod \"metallb-operator-controller-manager-64dc5dd5cf-sbhrw\" (UID: \"7dc202de-98db-4521-9ab3-a67ce9dff293\") " pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.595775 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7dc202de-98db-4521-9ab3-a67ce9dff293-webhook-cert\") pod \"metallb-operator-controller-manager-64dc5dd5cf-sbhrw\" (UID: \"7dc202de-98db-4521-9ab3-a67ce9dff293\") " pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.597919 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7dc202de-98db-4521-9ab3-a67ce9dff293-apiservice-cert\") pod \"metallb-operator-controller-manager-64dc5dd5cf-sbhrw\" (UID: \"7dc202de-98db-4521-9ab3-a67ce9dff293\") " pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.629076 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zfcv\" (UniqueName: \"kubernetes.io/projected/7dc202de-98db-4521-9ab3-a67ce9dff293-kube-api-access-5zfcv\") pod \"metallb-operator-controller-manager-64dc5dd5cf-sbhrw\" (UID: 
\"7dc202de-98db-4521-9ab3-a67ce9dff293\") " pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.706204 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2"] Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.706897 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.708583 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.708802 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-sm8vj" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.708994 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.724209 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2"] Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.739335 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.795031 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fc1cb75-193e-440a-a790-2fde8aa47103-apiservice-cert\") pod \"metallb-operator-webhook-server-56cbcf7d78-fnjp2\" (UID: \"4fc1cb75-193e-440a-a790-2fde8aa47103\") " pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.795133 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fc1cb75-193e-440a-a790-2fde8aa47103-webhook-cert\") pod \"metallb-operator-webhook-server-56cbcf7d78-fnjp2\" (UID: \"4fc1cb75-193e-440a-a790-2fde8aa47103\") " pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.795160 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w7c5\" (UniqueName: \"kubernetes.io/projected/4fc1cb75-193e-440a-a790-2fde8aa47103-kube-api-access-2w7c5\") pod \"metallb-operator-webhook-server-56cbcf7d78-fnjp2\" (UID: \"4fc1cb75-193e-440a-a790-2fde8aa47103\") " pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.896102 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fc1cb75-193e-440a-a790-2fde8aa47103-apiservice-cert\") pod \"metallb-operator-webhook-server-56cbcf7d78-fnjp2\" (UID: \"4fc1cb75-193e-440a-a790-2fde8aa47103\") " pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.896175 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2w7c5\" (UniqueName: \"kubernetes.io/projected/4fc1cb75-193e-440a-a790-2fde8aa47103-kube-api-access-2w7c5\") pod \"metallb-operator-webhook-server-56cbcf7d78-fnjp2\" (UID: \"4fc1cb75-193e-440a-a790-2fde8aa47103\") " pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.896198 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fc1cb75-193e-440a-a790-2fde8aa47103-webhook-cert\") pod \"metallb-operator-webhook-server-56cbcf7d78-fnjp2\" (UID: \"4fc1cb75-193e-440a-a790-2fde8aa47103\") " pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.904496 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fc1cb75-193e-440a-a790-2fde8aa47103-webhook-cert\") pod \"metallb-operator-webhook-server-56cbcf7d78-fnjp2\" (UID: \"4fc1cb75-193e-440a-a790-2fde8aa47103\") " pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.907367 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fc1cb75-193e-440a-a790-2fde8aa47103-apiservice-cert\") pod \"metallb-operator-webhook-server-56cbcf7d78-fnjp2\" (UID: \"4fc1cb75-193e-440a-a790-2fde8aa47103\") " pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" Nov 29 07:16:17 crc kubenswrapper[4828]: I1129 07:16:17.921173 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w7c5\" (UniqueName: \"kubernetes.io/projected/4fc1cb75-193e-440a-a790-2fde8aa47103-kube-api-access-2w7c5\") pod \"metallb-operator-webhook-server-56cbcf7d78-fnjp2\" (UID: \"4fc1cb75-193e-440a-a790-2fde8aa47103\") " 
pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" Nov 29 07:16:18 crc kubenswrapper[4828]: I1129 07:16:18.009147 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw"] Nov 29 07:16:18 crc kubenswrapper[4828]: W1129 07:16:18.019441 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dc202de_98db_4521_9ab3_a67ce9dff293.slice/crio-631279212c860d424f236ec2f0a79f88ad6143f463b400eed1e97043beea9ee4 WatchSource:0}: Error finding container 631279212c860d424f236ec2f0a79f88ad6143f463b400eed1e97043beea9ee4: Status 404 returned error can't find the container with id 631279212c860d424f236ec2f0a79f88ad6143f463b400eed1e97043beea9ee4 Nov 29 07:16:18 crc kubenswrapper[4828]: I1129 07:16:18.069081 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" Nov 29 07:16:18 crc kubenswrapper[4828]: I1129 07:16:18.501400 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2"] Nov 29 07:16:18 crc kubenswrapper[4828]: W1129 07:16:18.514991 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fc1cb75_193e_440a_a790_2fde8aa47103.slice/crio-b6658af5cf8217dfc004eae8876cb0fe8b67b3b32aa05c7190f880f063510c3a WatchSource:0}: Error finding container b6658af5cf8217dfc004eae8876cb0fe8b67b3b32aa05c7190f880f063510c3a: Status 404 returned error can't find the container with id b6658af5cf8217dfc004eae8876cb0fe8b67b3b32aa05c7190f880f063510c3a Nov 29 07:16:18 crc kubenswrapper[4828]: I1129 07:16:18.873612 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" 
event={"ID":"7dc202de-98db-4521-9ab3-a67ce9dff293","Type":"ContainerStarted","Data":"631279212c860d424f236ec2f0a79f88ad6143f463b400eed1e97043beea9ee4"} Nov 29 07:16:18 crc kubenswrapper[4828]: I1129 07:16:18.874923 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" event={"ID":"4fc1cb75-193e-440a-a790-2fde8aa47103","Type":"ContainerStarted","Data":"b6658af5cf8217dfc004eae8876cb0fe8b67b3b32aa05c7190f880f063510c3a"} Nov 29 07:16:24 crc kubenswrapper[4828]: I1129 07:16:24.920705 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" event={"ID":"7dc202de-98db-4521-9ab3-a67ce9dff293","Type":"ContainerStarted","Data":"7265a9116e3981d685fc6e84910824c863bdb9967fffc95dfa1f529894f433eb"} Nov 29 07:16:24 crc kubenswrapper[4828]: I1129 07:16:24.921042 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" Nov 29 07:16:24 crc kubenswrapper[4828]: I1129 07:16:24.922788 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" event={"ID":"4fc1cb75-193e-440a-a790-2fde8aa47103","Type":"ContainerStarted","Data":"e742f0c71031620b9e53fee2f73661787ec61c4371f4866076793af826bdd1be"} Nov 29 07:16:24 crc kubenswrapper[4828]: I1129 07:16:24.923208 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" Nov 29 07:16:24 crc kubenswrapper[4828]: I1129 07:16:24.960148 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" podStartSLOduration=1.8809712969999999 podStartE2EDuration="7.960118456s" podCreationTimestamp="2025-11-29 07:16:17 +0000 UTC" firstStartedPulling="2025-11-29 07:16:18.029477969 +0000 UTC m=+917.651554027" 
lastFinishedPulling="2025-11-29 07:16:24.108625128 +0000 UTC m=+923.730701186" observedRunningTime="2025-11-29 07:16:24.955785994 +0000 UTC m=+924.577862062" watchObservedRunningTime="2025-11-29 07:16:24.960118456 +0000 UTC m=+924.582194514" Nov 29 07:16:24 crc kubenswrapper[4828]: I1129 07:16:24.995679 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" podStartSLOduration=2.379930524 podStartE2EDuration="7.995656995s" podCreationTimestamp="2025-11-29 07:16:17 +0000 UTC" firstStartedPulling="2025-11-29 07:16:18.518219082 +0000 UTC m=+918.140295140" lastFinishedPulling="2025-11-29 07:16:24.133945553 +0000 UTC m=+923.756021611" observedRunningTime="2025-11-29 07:16:24.993588092 +0000 UTC m=+924.615664170" watchObservedRunningTime="2025-11-29 07:16:24.995656995 +0000 UTC m=+924.617733063" Nov 29 07:16:38 crc kubenswrapper[4828]: I1129 07:16:38.074360 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-56cbcf7d78-fnjp2" Nov 29 07:16:57 crc kubenswrapper[4828]: I1129 07:16:57.742782 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-64dc5dd5cf-sbhrw" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.530548 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-86b52"] Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.545435 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.548651 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.548876 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.549058 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-n8xc7" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.565943 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74"] Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.566814 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.576474 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.584116 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74"] Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.660008 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-kjddj"] Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.660952 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-kjddj" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.664482 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.664531 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.664819 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-6xqjc" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.665697 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.670192 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4f218ed-01de-4cf0-a800-ca644528acc3-metrics-certs\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.670251 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c4f218ed-01de-4cf0-a800-ca644528acc3-metrics\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.670305 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhrvq\" (UniqueName: \"kubernetes.io/projected/c4f218ed-01de-4cf0-a800-ca644528acc3-kube-api-access-lhrvq\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.670345 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c683344b-cd77-447f-b375-c83eb16100b6-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-klm74\" (UID: \"c683344b-cd77-447f-b375-c83eb16100b6\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.670426 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c4f218ed-01de-4cf0-a800-ca644528acc3-frr-conf\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.670483 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c4f218ed-01de-4cf0-a800-ca644528acc3-reloader\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.670534 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qd9r\" (UniqueName: \"kubernetes.io/projected/c683344b-cd77-447f-b375-c83eb16100b6-kube-api-access-8qd9r\") pod \"frr-k8s-webhook-server-7fcb986d4-klm74\" (UID: \"c683344b-cd77-447f-b375-c83eb16100b6\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.670578 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c4f218ed-01de-4cf0-a800-ca644528acc3-frr-startup\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.670789 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c4f218ed-01de-4cf0-a800-ca644528acc3-frr-sockets\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.683473 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-r78sm"] Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.684677 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-r78sm" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.687395 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.698697 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-r78sm"] Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772133 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c4f218ed-01de-4cf0-a800-ca644528acc3-frr-sockets\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772192 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4f218ed-01de-4cf0-a800-ca644528acc3-metrics-certs\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772232 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c4f218ed-01de-4cf0-a800-ca644528acc3-metrics\") pod \"frr-k8s-86b52\" (UID: 
\"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772277 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq76q\" (UniqueName: \"kubernetes.io/projected/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-kube-api-access-xq76q\") pod \"speaker-kjddj\" (UID: \"48a10b21-758a-47e9-8a65-1b6c9b6ba62a\") " pod="metallb-system/speaker-kjddj" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772302 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmbbd\" (UniqueName: \"kubernetes.io/projected/343eaf08-7337-45bd-90e6-650984143598-kube-api-access-rmbbd\") pod \"controller-f8648f98b-r78sm\" (UID: \"343eaf08-7337-45bd-90e6-650984143598\") " pod="metallb-system/controller-f8648f98b-r78sm" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772333 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhrvq\" (UniqueName: \"kubernetes.io/projected/c4f218ed-01de-4cf0-a800-ca644528acc3-kube-api-access-lhrvq\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772359 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/343eaf08-7337-45bd-90e6-650984143598-cert\") pod \"controller-f8648f98b-r78sm\" (UID: \"343eaf08-7337-45bd-90e6-650984143598\") " pod="metallb-system/controller-f8648f98b-r78sm" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772402 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c4f218ed-01de-4cf0-a800-ca644528acc3-frr-conf\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " 
pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772423 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c683344b-cd77-447f-b375-c83eb16100b6-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-klm74\" (UID: \"c683344b-cd77-447f-b375-c83eb16100b6\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772454 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c4f218ed-01de-4cf0-a800-ca644528acc3-reloader\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772484 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qd9r\" (UniqueName: \"kubernetes.io/projected/c683344b-cd77-447f-b375-c83eb16100b6-kube-api-access-8qd9r\") pod \"frr-k8s-webhook-server-7fcb986d4-klm74\" (UID: \"c683344b-cd77-447f-b375-c83eb16100b6\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772506 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c4f218ed-01de-4cf0-a800-ca644528acc3-frr-startup\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772545 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-metrics-certs\") pod \"speaker-kjddj\" (UID: \"48a10b21-758a-47e9-8a65-1b6c9b6ba62a\") " pod="metallb-system/speaker-kjddj" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 
07:16:58.772583 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-metallb-excludel2\") pod \"speaker-kjddj\" (UID: \"48a10b21-758a-47e9-8a65-1b6c9b6ba62a\") " pod="metallb-system/speaker-kjddj" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772615 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/343eaf08-7337-45bd-90e6-650984143598-metrics-certs\") pod \"controller-f8648f98b-r78sm\" (UID: \"343eaf08-7337-45bd-90e6-650984143598\") " pod="metallb-system/controller-f8648f98b-r78sm" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.772640 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-memberlist\") pod \"speaker-kjddj\" (UID: \"48a10b21-758a-47e9-8a65-1b6c9b6ba62a\") " pod="metallb-system/speaker-kjddj" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.773154 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c4f218ed-01de-4cf0-a800-ca644528acc3-frr-sockets\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: E1129 07:16:58.773341 4828 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 29 07:16:58 crc kubenswrapper[4828]: E1129 07:16:58.773466 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4f218ed-01de-4cf0-a800-ca644528acc3-metrics-certs podName:c4f218ed-01de-4cf0-a800-ca644528acc3 nodeName:}" failed. 
No retries permitted until 2025-11-29 07:16:59.273406704 +0000 UTC m=+958.895482762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c4f218ed-01de-4cf0-a800-ca644528acc3-metrics-certs") pod "frr-k8s-86b52" (UID: "c4f218ed-01de-4cf0-a800-ca644528acc3") : secret "frr-k8s-certs-secret" not found Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.774051 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c4f218ed-01de-4cf0-a800-ca644528acc3-metrics\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.774644 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c4f218ed-01de-4cf0-a800-ca644528acc3-frr-conf\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.775485 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c4f218ed-01de-4cf0-a800-ca644528acc3-reloader\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.776973 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c4f218ed-01de-4cf0-a800-ca644528acc3-frr-startup\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.781330 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c683344b-cd77-447f-b375-c83eb16100b6-cert\") pod 
\"frr-k8s-webhook-server-7fcb986d4-klm74\" (UID: \"c683344b-cd77-447f-b375-c83eb16100b6\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.794106 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qd9r\" (UniqueName: \"kubernetes.io/projected/c683344b-cd77-447f-b375-c83eb16100b6-kube-api-access-8qd9r\") pod \"frr-k8s-webhook-server-7fcb986d4-klm74\" (UID: \"c683344b-cd77-447f-b375-c83eb16100b6\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.809999 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhrvq\" (UniqueName: \"kubernetes.io/projected/c4f218ed-01de-4cf0-a800-ca644528acc3-kube-api-access-lhrvq\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.874165 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/343eaf08-7337-45bd-90e6-650984143598-cert\") pod \"controller-f8648f98b-r78sm\" (UID: \"343eaf08-7337-45bd-90e6-650984143598\") " pod="metallb-system/controller-f8648f98b-r78sm" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.874517 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-metrics-certs\") pod \"speaker-kjddj\" (UID: \"48a10b21-758a-47e9-8a65-1b6c9b6ba62a\") " pod="metallb-system/speaker-kjddj" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.874640 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-metallb-excludel2\") pod \"speaker-kjddj\" (UID: 
\"48a10b21-758a-47e9-8a65-1b6c9b6ba62a\") " pod="metallb-system/speaker-kjddj" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.874746 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/343eaf08-7337-45bd-90e6-650984143598-metrics-certs\") pod \"controller-f8648f98b-r78sm\" (UID: \"343eaf08-7337-45bd-90e6-650984143598\") " pod="metallb-system/controller-f8648f98b-r78sm" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.874875 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-memberlist\") pod \"speaker-kjddj\" (UID: \"48a10b21-758a-47e9-8a65-1b6c9b6ba62a\") " pod="metallb-system/speaker-kjddj" Nov 29 07:16:58 crc kubenswrapper[4828]: E1129 07:16:58.874969 4828 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 29 07:16:58 crc kubenswrapper[4828]: E1129 07:16:58.875052 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-memberlist podName:48a10b21-758a-47e9-8a65-1b6c9b6ba62a nodeName:}" failed. No retries permitted until 2025-11-29 07:16:59.375029023 +0000 UTC m=+958.997105081 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-memberlist") pod "speaker-kjddj" (UID: "48a10b21-758a-47e9-8a65-1b6c9b6ba62a") : secret "metallb-memberlist" not found Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.875185 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq76q\" (UniqueName: \"kubernetes.io/projected/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-kube-api-access-xq76q\") pod \"speaker-kjddj\" (UID: \"48a10b21-758a-47e9-8a65-1b6c9b6ba62a\") " pod="metallb-system/speaker-kjddj" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.875360 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmbbd\" (UniqueName: \"kubernetes.io/projected/343eaf08-7337-45bd-90e6-650984143598-kube-api-access-rmbbd\") pod \"controller-f8648f98b-r78sm\" (UID: \"343eaf08-7337-45bd-90e6-650984143598\") " pod="metallb-system/controller-f8648f98b-r78sm" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.875490 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-metallb-excludel2\") pod \"speaker-kjddj\" (UID: \"48a10b21-758a-47e9-8a65-1b6c9b6ba62a\") " pod="metallb-system/speaker-kjddj" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.877976 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/343eaf08-7337-45bd-90e6-650984143598-cert\") pod \"controller-f8648f98b-r78sm\" (UID: \"343eaf08-7337-45bd-90e6-650984143598\") " pod="metallb-system/controller-f8648f98b-r78sm" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.878410 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/343eaf08-7337-45bd-90e6-650984143598-metrics-certs\") pod 
\"controller-f8648f98b-r78sm\" (UID: \"343eaf08-7337-45bd-90e6-650984143598\") " pod="metallb-system/controller-f8648f98b-r78sm" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.886935 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-metrics-certs\") pod \"speaker-kjddj\" (UID: \"48a10b21-758a-47e9-8a65-1b6c9b6ba62a\") " pod="metallb-system/speaker-kjddj" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.893223 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmbbd\" (UniqueName: \"kubernetes.io/projected/343eaf08-7337-45bd-90e6-650984143598-kube-api-access-rmbbd\") pod \"controller-f8648f98b-r78sm\" (UID: \"343eaf08-7337-45bd-90e6-650984143598\") " pod="metallb-system/controller-f8648f98b-r78sm" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.899163 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74" Nov 29 07:16:58 crc kubenswrapper[4828]: I1129 07:16:58.902578 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq76q\" (UniqueName: \"kubernetes.io/projected/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-kube-api-access-xq76q\") pod \"speaker-kjddj\" (UID: \"48a10b21-758a-47e9-8a65-1b6c9b6ba62a\") " pod="metallb-system/speaker-kjddj" Nov 29 07:16:59 crc kubenswrapper[4828]: I1129 07:16:59.012078 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-r78sm" Nov 29 07:16:59 crc kubenswrapper[4828]: I1129 07:16:59.137036 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74"] Nov 29 07:16:59 crc kubenswrapper[4828]: W1129 07:16:59.186009 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc683344b_cd77_447f_b375_c83eb16100b6.slice/crio-6c74138f80fe16be99f6d670846e6cd716d7a3a371dcde607fec48fa42b39596 WatchSource:0}: Error finding container 6c74138f80fe16be99f6d670846e6cd716d7a3a371dcde607fec48fa42b39596: Status 404 returned error can't find the container with id 6c74138f80fe16be99f6d670846e6cd716d7a3a371dcde607fec48fa42b39596 Nov 29 07:16:59 crc kubenswrapper[4828]: I1129 07:16:59.206331 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74" event={"ID":"c683344b-cd77-447f-b375-c83eb16100b6","Type":"ContainerStarted","Data":"6c74138f80fe16be99f6d670846e6cd716d7a3a371dcde607fec48fa42b39596"} Nov 29 07:16:59 crc kubenswrapper[4828]: I1129 07:16:59.282152 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-r78sm"] Nov 29 07:16:59 crc kubenswrapper[4828]: I1129 07:16:59.283296 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4f218ed-01de-4cf0-a800-ca644528acc3-metrics-certs\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" Nov 29 07:16:59 crc kubenswrapper[4828]: I1129 07:16:59.290529 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4f218ed-01de-4cf0-a800-ca644528acc3-metrics-certs\") pod \"frr-k8s-86b52\" (UID: \"c4f218ed-01de-4cf0-a800-ca644528acc3\") " pod="metallb-system/frr-k8s-86b52" 
Nov 29 07:16:59 crc kubenswrapper[4828]: I1129 07:16:59.385037 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-memberlist\") pod \"speaker-kjddj\" (UID: \"48a10b21-758a-47e9-8a65-1b6c9b6ba62a\") " pod="metallb-system/speaker-kjddj" Nov 29 07:16:59 crc kubenswrapper[4828]: E1129 07:16:59.385223 4828 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 29 07:16:59 crc kubenswrapper[4828]: E1129 07:16:59.385326 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-memberlist podName:48a10b21-758a-47e9-8a65-1b6c9b6ba62a nodeName:}" failed. No retries permitted until 2025-11-29 07:17:00.385288882 +0000 UTC m=+960.007364940 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-memberlist") pod "speaker-kjddj" (UID: "48a10b21-758a-47e9-8a65-1b6c9b6ba62a") : secret "metallb-memberlist" not found Nov 29 07:16:59 crc kubenswrapper[4828]: I1129 07:16:59.492686 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-86b52" Nov 29 07:17:00 crc kubenswrapper[4828]: I1129 07:17:00.214895 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-86b52" event={"ID":"c4f218ed-01de-4cf0-a800-ca644528acc3","Type":"ContainerStarted","Data":"d78f8bb620b7f37ce91b73cfc2018898c1b61874d64239a1b6f59dc98a81cec9"} Nov 29 07:17:00 crc kubenswrapper[4828]: I1129 07:17:00.216924 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-r78sm" event={"ID":"343eaf08-7337-45bd-90e6-650984143598","Type":"ContainerStarted","Data":"947794a88a868804db88f61a540d33a0c1dc8503da0bec39daa4210bb2e55cf8"} Nov 29 07:17:00 crc kubenswrapper[4828]: I1129 07:17:00.217331 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-r78sm" event={"ID":"343eaf08-7337-45bd-90e6-650984143598","Type":"ContainerStarted","Data":"509bb4673ce44519483dc7c3d304cbf880c77f34dba82fb877bd1d7774a82fc0"} Nov 29 07:17:00 crc kubenswrapper[4828]: I1129 07:17:00.217355 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-r78sm" Nov 29 07:17:00 crc kubenswrapper[4828]: I1129 07:17:00.217370 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-r78sm" event={"ID":"343eaf08-7337-45bd-90e6-650984143598","Type":"ContainerStarted","Data":"f90a4360aa85c2c27a633eff4b36174351439fa1bd8a0df2eea3e3400359d5c6"} Nov 29 07:17:00 crc kubenswrapper[4828]: I1129 07:17:00.398053 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-memberlist\") pod \"speaker-kjddj\" (UID: \"48a10b21-758a-47e9-8a65-1b6c9b6ba62a\") " pod="metallb-system/speaker-kjddj" Nov 29 07:17:00 crc kubenswrapper[4828]: I1129 07:17:00.404198 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"memberlist\" (UniqueName: \"kubernetes.io/secret/48a10b21-758a-47e9-8a65-1b6c9b6ba62a-memberlist\") pod \"speaker-kjddj\" (UID: \"48a10b21-758a-47e9-8a65-1b6c9b6ba62a\") " pod="metallb-system/speaker-kjddj" Nov 29 07:17:00 crc kubenswrapper[4828]: I1129 07:17:00.481588 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-kjddj" Nov 29 07:17:00 crc kubenswrapper[4828]: W1129 07:17:00.504332 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48a10b21_758a_47e9_8a65_1b6c9b6ba62a.slice/crio-772f57243e9191d474f61ca7717fa62c5099a05d30f180615cec254b180ebf55 WatchSource:0}: Error finding container 772f57243e9191d474f61ca7717fa62c5099a05d30f180615cec254b180ebf55: Status 404 returned error can't find the container with id 772f57243e9191d474f61ca7717fa62c5099a05d30f180615cec254b180ebf55 Nov 29 07:17:01 crc kubenswrapper[4828]: I1129 07:17:01.230405 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kjddj" event={"ID":"48a10b21-758a-47e9-8a65-1b6c9b6ba62a","Type":"ContainerStarted","Data":"7ede6bb74a979db9e89c02b53d2c337fca1fa4560e55fee80a7af5fb76f0f08d"} Nov 29 07:17:01 crc kubenswrapper[4828]: I1129 07:17:01.230498 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kjddj" event={"ID":"48a10b21-758a-47e9-8a65-1b6c9b6ba62a","Type":"ContainerStarted","Data":"772f57243e9191d474f61ca7717fa62c5099a05d30f180615cec254b180ebf55"} Nov 29 07:17:01 crc kubenswrapper[4828]: I1129 07:17:01.462494 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-r78sm" podStartSLOduration=3.462454836 podStartE2EDuration="3.462454836s" podCreationTimestamp="2025-11-29 07:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:00.238629437 +0000 
UTC m=+959.860705495" watchObservedRunningTime="2025-11-29 07:17:01.462454836 +0000 UTC m=+961.084530894" Nov 29 07:17:02 crc kubenswrapper[4828]: I1129 07:17:02.251642 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kjddj" event={"ID":"48a10b21-758a-47e9-8a65-1b6c9b6ba62a","Type":"ContainerStarted","Data":"65a14cd6fae8e24df56950b1f70ce8f5837ebb81924da6e890e666bad22b7bf7"} Nov 29 07:17:02 crc kubenswrapper[4828]: I1129 07:17:02.251815 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-kjddj" Nov 29 07:17:02 crc kubenswrapper[4828]: I1129 07:17:02.273418 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-kjddj" podStartSLOduration=4.273396825 podStartE2EDuration="4.273396825s" podCreationTimestamp="2025-11-29 07:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:02.270920271 +0000 UTC m=+961.892996329" watchObservedRunningTime="2025-11-29 07:17:02.273396825 +0000 UTC m=+961.895472883" Nov 29 07:17:09 crc kubenswrapper[4828]: I1129 07:17:09.016338 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-r78sm" Nov 29 07:17:20 crc kubenswrapper[4828]: I1129 07:17:20.484802 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-kjddj" Nov 29 07:17:21 crc kubenswrapper[4828]: I1129 07:17:21.383303 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-86b52" event={"ID":"c4f218ed-01de-4cf0-a800-ca644528acc3","Type":"ContainerStarted","Data":"1633d26392ebea382803d3058f38f219a9f0d8a657e6d0b20657a4cf59fb8536"} Nov 29 07:17:22 crc kubenswrapper[4828]: I1129 07:17:22.397876 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74" 
event={"ID":"c683344b-cd77-447f-b375-c83eb16100b6","Type":"ContainerStarted","Data":"6ef0b53bf8e6afdfd1114ec23987980a30d2eb334b9387114eeba9bfafe667dc"} Nov 29 07:17:22 crc kubenswrapper[4828]: I1129 07:17:22.400887 4828 generic.go:334] "Generic (PLEG): container finished" podID="c4f218ed-01de-4cf0-a800-ca644528acc3" containerID="1633d26392ebea382803d3058f38f219a9f0d8a657e6d0b20657a4cf59fb8536" exitCode=0 Nov 29 07:17:22 crc kubenswrapper[4828]: I1129 07:17:22.400951 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-86b52" event={"ID":"c4f218ed-01de-4cf0-a800-ca644528acc3","Type":"ContainerDied","Data":"1633d26392ebea382803d3058f38f219a9f0d8a657e6d0b20657a4cf59fb8536"} Nov 29 07:17:22 crc kubenswrapper[4828]: I1129 07:17:22.415773 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74" podStartSLOduration=2.940955186 podStartE2EDuration="24.415736282s" podCreationTimestamp="2025-11-29 07:16:58 +0000 UTC" firstStartedPulling="2025-11-29 07:16:59.190292029 +0000 UTC m=+958.812368087" lastFinishedPulling="2025-11-29 07:17:20.665073125 +0000 UTC m=+980.287149183" observedRunningTime="2025-11-29 07:17:22.412516719 +0000 UTC m=+982.034592777" watchObservedRunningTime="2025-11-29 07:17:22.415736282 +0000 UTC m=+982.037812340" Nov 29 07:17:23 crc kubenswrapper[4828]: I1129 07:17:23.430865 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74" Nov 29 07:17:24 crc kubenswrapper[4828]: I1129 07:17:24.423355 4828 generic.go:334] "Generic (PLEG): container finished" podID="c4f218ed-01de-4cf0-a800-ca644528acc3" containerID="fc1150c0f62d236644f775867f5f61eee1f8ca0085d1cfb029f1141717984be6" exitCode=0 Nov 29 07:17:24 crc kubenswrapper[4828]: I1129 07:17:24.423437 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-86b52" 
event={"ID":"c4f218ed-01de-4cf0-a800-ca644528acc3","Type":"ContainerDied","Data":"fc1150c0f62d236644f775867f5f61eee1f8ca0085d1cfb029f1141717984be6"} Nov 29 07:17:25 crc kubenswrapper[4828]: I1129 07:17:25.430777 4828 generic.go:334] "Generic (PLEG): container finished" podID="c4f218ed-01de-4cf0-a800-ca644528acc3" containerID="7233cd1ba74631e5e1d598dd61ce2b84c4054f716998e4647762f5ef869e3448" exitCode=0 Nov 29 07:17:25 crc kubenswrapper[4828]: I1129 07:17:25.431569 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-86b52" event={"ID":"c4f218ed-01de-4cf0-a800-ca644528acc3","Type":"ContainerDied","Data":"7233cd1ba74631e5e1d598dd61ce2b84c4054f716998e4647762f5ef869e3448"} Nov 29 07:17:26 crc kubenswrapper[4828]: I1129 07:17:26.442530 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-86b52" event={"ID":"c4f218ed-01de-4cf0-a800-ca644528acc3","Type":"ContainerStarted","Data":"c94d76999e717e83b61150fe9350d496b50057789140b594e640905a7d7c2667"} Nov 29 07:17:26 crc kubenswrapper[4828]: I1129 07:17:26.442580 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-86b52" event={"ID":"c4f218ed-01de-4cf0-a800-ca644528acc3","Type":"ContainerStarted","Data":"2f38cba7f04b7ee4c9a2228c958a76100f0f77dbc7cea7c1c7115058f0955251"} Nov 29 07:17:27 crc kubenswrapper[4828]: I1129 07:17:27.452637 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-86b52" event={"ID":"c4f218ed-01de-4cf0-a800-ca644528acc3","Type":"ContainerStarted","Data":"62f67f8304eef81a3c12e663c71bb2d1dce3b9cdbec1a5a095a329815b9dbb4f"} Nov 29 07:17:27 crc kubenswrapper[4828]: I1129 07:17:27.453773 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-86b52" event={"ID":"c4f218ed-01de-4cf0-a800-ca644528acc3","Type":"ContainerStarted","Data":"46392ce47fe0db9600cc3efe92c47c965eebe7e0f81b0fc19fee38679d4fdb46"} Nov 29 07:17:27 crc kubenswrapper[4828]: I1129 07:17:27.550655 4828 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gkdw4"] Nov 29 07:17:27 crc kubenswrapper[4828]: I1129 07:17:27.551707 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gkdw4" Nov 29 07:17:27 crc kubenswrapper[4828]: I1129 07:17:27.554903 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 29 07:17:27 crc kubenswrapper[4828]: I1129 07:17:27.555292 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 29 07:17:27 crc kubenswrapper[4828]: I1129 07:17:27.555472 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-btzjj" Nov 29 07:17:27 crc kubenswrapper[4828]: I1129 07:17:27.564471 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gkdw4"] Nov 29 07:17:27 crc kubenswrapper[4828]: I1129 07:17:27.656327 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmtbk\" (UniqueName: \"kubernetes.io/projected/bb105e97-fe51-4b06-9224-e68b121623b8-kube-api-access-dmtbk\") pod \"openstack-operator-index-gkdw4\" (UID: \"bb105e97-fe51-4b06-9224-e68b121623b8\") " pod="openstack-operators/openstack-operator-index-gkdw4" Nov 29 07:17:27 crc kubenswrapper[4828]: I1129 07:17:27.757570 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmtbk\" (UniqueName: \"kubernetes.io/projected/bb105e97-fe51-4b06-9224-e68b121623b8-kube-api-access-dmtbk\") pod \"openstack-operator-index-gkdw4\" (UID: \"bb105e97-fe51-4b06-9224-e68b121623b8\") " pod="openstack-operators/openstack-operator-index-gkdw4" Nov 29 07:17:27 crc kubenswrapper[4828]: I1129 07:17:27.779981 4828 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dmtbk\" (UniqueName: \"kubernetes.io/projected/bb105e97-fe51-4b06-9224-e68b121623b8-kube-api-access-dmtbk\") pod \"openstack-operator-index-gkdw4\" (UID: \"bb105e97-fe51-4b06-9224-e68b121623b8\") " pod="openstack-operators/openstack-operator-index-gkdw4" Nov 29 07:17:27 crc kubenswrapper[4828]: I1129 07:17:27.891150 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gkdw4" Nov 29 07:17:28 crc kubenswrapper[4828]: I1129 07:17:28.307691 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gkdw4"] Nov 29 07:17:28 crc kubenswrapper[4828]: I1129 07:17:28.463093 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-86b52" event={"ID":"c4f218ed-01de-4cf0-a800-ca644528acc3","Type":"ContainerStarted","Data":"0aac4e50eb5a1ccd2ae92e19fc16e4fc50221db2c95afa35038924f6a5b91dc1"} Nov 29 07:17:28 crc kubenswrapper[4828]: I1129 07:17:28.464987 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gkdw4" event={"ID":"bb105e97-fe51-4b06-9224-e68b121623b8","Type":"ContainerStarted","Data":"0f2b03e79c389d38ed782535521f0211b0538918fadae726b92c667de57b80c4"} Nov 29 07:17:32 crc kubenswrapper[4828]: I1129 07:17:32.746851 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gkdw4"] Nov 29 07:17:33 crc kubenswrapper[4828]: I1129 07:17:33.350965 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qgmcb"] Nov 29 07:17:33 crc kubenswrapper[4828]: I1129 07:17:33.354421 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qgmcb" Nov 29 07:17:33 crc kubenswrapper[4828]: I1129 07:17:33.358727 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qgmcb"] Nov 29 07:17:33 crc kubenswrapper[4828]: I1129 07:17:33.438643 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf48b\" (UniqueName: \"kubernetes.io/projected/1819d352-6ff1-4f6a-9a9f-899c6e045c19-kube-api-access-vf48b\") pod \"openstack-operator-index-qgmcb\" (UID: \"1819d352-6ff1-4f6a-9a9f-899c6e045c19\") " pod="openstack-operators/openstack-operator-index-qgmcb" Nov 29 07:17:33 crc kubenswrapper[4828]: I1129 07:17:33.540041 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf48b\" (UniqueName: \"kubernetes.io/projected/1819d352-6ff1-4f6a-9a9f-899c6e045c19-kube-api-access-vf48b\") pod \"openstack-operator-index-qgmcb\" (UID: \"1819d352-6ff1-4f6a-9a9f-899c6e045c19\") " pod="openstack-operators/openstack-operator-index-qgmcb" Nov 29 07:17:33 crc kubenswrapper[4828]: I1129 07:17:33.566664 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf48b\" (UniqueName: \"kubernetes.io/projected/1819d352-6ff1-4f6a-9a9f-899c6e045c19-kube-api-access-vf48b\") pod \"openstack-operator-index-qgmcb\" (UID: \"1819d352-6ff1-4f6a-9a9f-899c6e045c19\") " pod="openstack-operators/openstack-operator-index-qgmcb" Nov 29 07:17:33 crc kubenswrapper[4828]: I1129 07:17:33.670300 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qgmcb" Nov 29 07:17:34 crc kubenswrapper[4828]: I1129 07:17:34.105109 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qgmcb"] Nov 29 07:17:34 crc kubenswrapper[4828]: I1129 07:17:34.507079 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qgmcb" event={"ID":"1819d352-6ff1-4f6a-9a9f-899c6e045c19","Type":"ContainerStarted","Data":"f3d62564848ff69e34f2c34ad230fe66194d3478d2d455afca9ac5a519379ab8"} Nov 29 07:17:35 crc kubenswrapper[4828]: I1129 07:17:35.517772 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-86b52" event={"ID":"c4f218ed-01de-4cf0-a800-ca644528acc3","Type":"ContainerStarted","Data":"9b741f2e6bced2dd4094617faf71864d40f048d329a8ae55f3c3e10d9cb1e7fe"} Nov 29 07:17:35 crc kubenswrapper[4828]: I1129 07:17:35.518020 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-86b52" Nov 29 07:17:35 crc kubenswrapper[4828]: I1129 07:17:35.522812 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-86b52" Nov 29 07:17:35 crc kubenswrapper[4828]: I1129 07:17:35.539802 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-86b52" podStartSLOduration=16.545287363 podStartE2EDuration="37.539778784s" podCreationTimestamp="2025-11-29 07:16:58 +0000 UTC" firstStartedPulling="2025-11-29 07:16:59.689765089 +0000 UTC m=+959.311841147" lastFinishedPulling="2025-11-29 07:17:20.68425651 +0000 UTC m=+980.306332568" observedRunningTime="2025-11-29 07:17:35.535475763 +0000 UTC m=+995.157551831" watchObservedRunningTime="2025-11-29 07:17:35.539778784 +0000 UTC m=+995.161854842" Nov 29 07:17:38 crc kubenswrapper[4828]: I1129 07:17:38.907349 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-klm74" Nov 29 07:17:39 crc kubenswrapper[4828]: I1129 07:17:39.493945 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-86b52" Nov 29 07:17:39 crc kubenswrapper[4828]: I1129 07:17:39.532913 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-86b52" Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.404351 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jwbqv"] Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.405661 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.435527 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jwbqv"] Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.455633 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8705c903-8693-4892-a4c1-d50a086db042-catalog-content\") pod \"community-operators-jwbqv\" (UID: \"8705c903-8693-4892-a4c1-d50a086db042\") " pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.455884 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8705c903-8693-4892-a4c1-d50a086db042-utilities\") pod \"community-operators-jwbqv\" (UID: \"8705c903-8693-4892-a4c1-d50a086db042\") " pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.456002 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg8gt\" (UniqueName: 
\"kubernetes.io/projected/8705c903-8693-4892-a4c1-d50a086db042-kube-api-access-vg8gt\") pod \"community-operators-jwbqv\" (UID: \"8705c903-8693-4892-a4c1-d50a086db042\") " pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.487022 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.487349 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.557170 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg8gt\" (UniqueName: \"kubernetes.io/projected/8705c903-8693-4892-a4c1-d50a086db042-kube-api-access-vg8gt\") pod \"community-operators-jwbqv\" (UID: \"8705c903-8693-4892-a4c1-d50a086db042\") " pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.557256 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8705c903-8693-4892-a4c1-d50a086db042-catalog-content\") pod \"community-operators-jwbqv\" (UID: \"8705c903-8693-4892-a4c1-d50a086db042\") " pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.557294 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8705c903-8693-4892-a4c1-d50a086db042-utilities\") pod \"community-operators-jwbqv\" (UID: \"8705c903-8693-4892-a4c1-d50a086db042\") " pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.557920 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8705c903-8693-4892-a4c1-d50a086db042-utilities\") pod \"community-operators-jwbqv\" (UID: \"8705c903-8693-4892-a4c1-d50a086db042\") " pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.557982 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8705c903-8693-4892-a4c1-d50a086db042-catalog-content\") pod \"community-operators-jwbqv\" (UID: \"8705c903-8693-4892-a4c1-d50a086db042\") " pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.578452 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg8gt\" (UniqueName: \"kubernetes.io/projected/8705c903-8693-4892-a4c1-d50a086db042-kube-api-access-vg8gt\") pod \"community-operators-jwbqv\" (UID: \"8705c903-8693-4892-a4c1-d50a086db042\") " pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:17:41 crc kubenswrapper[4828]: I1129 07:17:41.734081 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:17:43 crc kubenswrapper[4828]: I1129 07:17:43.438121 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jwbqv"] Nov 29 07:17:44 crc kubenswrapper[4828]: I1129 07:17:44.587898 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-svsn6"] Nov 29 07:17:44 crc kubenswrapper[4828]: I1129 07:17:44.589687 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:17:44 crc kubenswrapper[4828]: I1129 07:17:44.598285 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svsn6"] Nov 29 07:17:44 crc kubenswrapper[4828]: I1129 07:17:44.697043 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l9l9\" (UniqueName: \"kubernetes.io/projected/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-kube-api-access-5l9l9\") pod \"redhat-marketplace-svsn6\" (UID: \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\") " pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:17:44 crc kubenswrapper[4828]: I1129 07:17:44.697131 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-utilities\") pod \"redhat-marketplace-svsn6\" (UID: \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\") " pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:17:44 crc kubenswrapper[4828]: I1129 07:17:44.697195 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-catalog-content\") pod \"redhat-marketplace-svsn6\" (UID: \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\") " 
pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:17:44 crc kubenswrapper[4828]: W1129 07:17:44.708902 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8705c903_8693_4892_a4c1_d50a086db042.slice/crio-ecff091eb4c4219d1f872584c7dd43e98d566bf504ffa2592072770dd6423fa7 WatchSource:0}: Error finding container ecff091eb4c4219d1f872584c7dd43e98d566bf504ffa2592072770dd6423fa7: Status 404 returned error can't find the container with id ecff091eb4c4219d1f872584c7dd43e98d566bf504ffa2592072770dd6423fa7 Nov 29 07:17:44 crc kubenswrapper[4828]: I1129 07:17:44.798731 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l9l9\" (UniqueName: \"kubernetes.io/projected/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-kube-api-access-5l9l9\") pod \"redhat-marketplace-svsn6\" (UID: \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\") " pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:17:44 crc kubenswrapper[4828]: I1129 07:17:44.798840 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-utilities\") pod \"redhat-marketplace-svsn6\" (UID: \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\") " pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:17:44 crc kubenswrapper[4828]: I1129 07:17:44.798902 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-catalog-content\") pod \"redhat-marketplace-svsn6\" (UID: \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\") " pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:17:44 crc kubenswrapper[4828]: I1129 07:17:44.799555 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-catalog-content\") pod \"redhat-marketplace-svsn6\" (UID: \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\") " pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:17:44 crc kubenswrapper[4828]: I1129 07:17:44.799577 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-utilities\") pod \"redhat-marketplace-svsn6\" (UID: \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\") " pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:17:44 crc kubenswrapper[4828]: I1129 07:17:44.821136 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l9l9\" (UniqueName: \"kubernetes.io/projected/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-kube-api-access-5l9l9\") pod \"redhat-marketplace-svsn6\" (UID: \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\") " pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:17:44 crc kubenswrapper[4828]: I1129 07:17:44.924986 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:17:45 crc kubenswrapper[4828]: I1129 07:17:45.586676 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwbqv" event={"ID":"8705c903-8693-4892-a4c1-d50a086db042","Type":"ContainerStarted","Data":"ecff091eb4c4219d1f872584c7dd43e98d566bf504ffa2592072770dd6423fa7"} Nov 29 07:17:48 crc kubenswrapper[4828]: I1129 07:17:48.384034 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2hfcx"] Nov 29 07:17:48 crc kubenswrapper[4828]: I1129 07:17:48.387367 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:17:48 crc kubenswrapper[4828]: I1129 07:17:48.390189 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2hfcx"] Nov 29 07:17:48 crc kubenswrapper[4828]: I1129 07:17:48.456032 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd962ede-c549-48d3-9cf3-aa649d254b4a-utilities\") pod \"certified-operators-2hfcx\" (UID: \"cd962ede-c549-48d3-9cf3-aa649d254b4a\") " pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:17:48 crc kubenswrapper[4828]: I1129 07:17:48.456106 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd962ede-c549-48d3-9cf3-aa649d254b4a-catalog-content\") pod \"certified-operators-2hfcx\" (UID: \"cd962ede-c549-48d3-9cf3-aa649d254b4a\") " pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:17:48 crc kubenswrapper[4828]: I1129 07:17:48.456509 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfsvf\" (UniqueName: \"kubernetes.io/projected/cd962ede-c549-48d3-9cf3-aa649d254b4a-kube-api-access-pfsvf\") pod \"certified-operators-2hfcx\" (UID: \"cd962ede-c549-48d3-9cf3-aa649d254b4a\") " pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:17:48 crc kubenswrapper[4828]: I1129 07:17:48.557859 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd962ede-c549-48d3-9cf3-aa649d254b4a-utilities\") pod \"certified-operators-2hfcx\" (UID: \"cd962ede-c549-48d3-9cf3-aa649d254b4a\") " pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:17:48 crc kubenswrapper[4828]: I1129 07:17:48.557922 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd962ede-c549-48d3-9cf3-aa649d254b4a-catalog-content\") pod \"certified-operators-2hfcx\" (UID: \"cd962ede-c549-48d3-9cf3-aa649d254b4a\") " pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:17:48 crc kubenswrapper[4828]: I1129 07:17:48.557970 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfsvf\" (UniqueName: \"kubernetes.io/projected/cd962ede-c549-48d3-9cf3-aa649d254b4a-kube-api-access-pfsvf\") pod \"certified-operators-2hfcx\" (UID: \"cd962ede-c549-48d3-9cf3-aa649d254b4a\") " pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:17:48 crc kubenswrapper[4828]: I1129 07:17:48.558453 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd962ede-c549-48d3-9cf3-aa649d254b4a-utilities\") pod \"certified-operators-2hfcx\" (UID: \"cd962ede-c549-48d3-9cf3-aa649d254b4a\") " pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:17:48 crc kubenswrapper[4828]: I1129 07:17:48.558476 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd962ede-c549-48d3-9cf3-aa649d254b4a-catalog-content\") pod \"certified-operators-2hfcx\" (UID: \"cd962ede-c549-48d3-9cf3-aa649d254b4a\") " pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:17:48 crc kubenswrapper[4828]: I1129 07:17:48.578021 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfsvf\" (UniqueName: \"kubernetes.io/projected/cd962ede-c549-48d3-9cf3-aa649d254b4a-kube-api-access-pfsvf\") pod \"certified-operators-2hfcx\" (UID: \"cd962ede-c549-48d3-9cf3-aa649d254b4a\") " pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:17:48 crc kubenswrapper[4828]: I1129 07:17:48.715364 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:17:52 crc kubenswrapper[4828]: I1129 07:17:52.069058 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svsn6"] Nov 29 07:17:52 crc kubenswrapper[4828]: W1129 07:17:52.078112 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67e1ef8a_a9c4_41ed_bce5_73f3d8d33669.slice/crio-b73e22ae2c821ad41ac4a345f7d1198501775bca3a1068283b5cd53deae17047 WatchSource:0}: Error finding container b73e22ae2c821ad41ac4a345f7d1198501775bca3a1068283b5cd53deae17047: Status 404 returned error can't find the container with id b73e22ae2c821ad41ac4a345f7d1198501775bca3a1068283b5cd53deae17047 Nov 29 07:17:52 crc kubenswrapper[4828]: I1129 07:17:52.106890 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2hfcx"] Nov 29 07:17:52 crc kubenswrapper[4828]: W1129 07:17:52.108652 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd962ede_c549_48d3_9cf3_aa649d254b4a.slice/crio-67f2287d6d2d0585f73f550609fa5a799e769f9a204c0e173ef66b43d17747e1 WatchSource:0}: Error finding container 67f2287d6d2d0585f73f550609fa5a799e769f9a204c0e173ef66b43d17747e1: Status 404 returned error can't find the container with id 67f2287d6d2d0585f73f550609fa5a799e769f9a204c0e173ef66b43d17747e1 Nov 29 07:17:52 crc kubenswrapper[4828]: I1129 07:17:52.629828 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qgmcb" event={"ID":"1819d352-6ff1-4f6a-9a9f-899c6e045c19","Type":"ContainerStarted","Data":"d8e7d967f2ec01f8f32f7e8e9e15f8fb50a4a428acf7e01d762d2a3cbf227836"} Nov 29 07:17:52 crc kubenswrapper[4828]: I1129 07:17:52.630693 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svsn6" 
event={"ID":"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669","Type":"ContainerStarted","Data":"b73e22ae2c821ad41ac4a345f7d1198501775bca3a1068283b5cd53deae17047"} Nov 29 07:17:52 crc kubenswrapper[4828]: I1129 07:17:52.631733 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2hfcx" event={"ID":"cd962ede-c549-48d3-9cf3-aa649d254b4a","Type":"ContainerStarted","Data":"67f2287d6d2d0585f73f550609fa5a799e769f9a204c0e173ef66b43d17747e1"} Nov 29 07:17:52 crc kubenswrapper[4828]: I1129 07:17:52.633729 4828 generic.go:334] "Generic (PLEG): container finished" podID="8705c903-8693-4892-a4c1-d50a086db042" containerID="eda7bae9a9333e8b84ff65d8ac43eebe2aced8f7b799eb89bac14edad471a62f" exitCode=0 Nov 29 07:17:52 crc kubenswrapper[4828]: I1129 07:17:52.633760 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwbqv" event={"ID":"8705c903-8693-4892-a4c1-d50a086db042","Type":"ContainerDied","Data":"eda7bae9a9333e8b84ff65d8ac43eebe2aced8f7b799eb89bac14edad471a62f"} Nov 29 07:17:54 crc kubenswrapper[4828]: I1129 07:17:54.665074 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qgmcb" podStartSLOduration=4.027552692 podStartE2EDuration="21.665042252s" podCreationTimestamp="2025-11-29 07:17:33 +0000 UTC" firstStartedPulling="2025-11-29 07:17:34.112699248 +0000 UTC m=+993.734775306" lastFinishedPulling="2025-11-29 07:17:51.750188808 +0000 UTC m=+1011.372264866" observedRunningTime="2025-11-29 07:17:54.657883597 +0000 UTC m=+1014.279959675" watchObservedRunningTime="2025-11-29 07:17:54.665042252 +0000 UTC m=+1014.287118330" Nov 29 07:17:55 crc kubenswrapper[4828]: I1129 07:17:55.653003 4828 generic.go:334] "Generic (PLEG): container finished" podID="67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" containerID="fc4d40291989e1d2d8e26deb0b2e3407278d63e4c3c35969771aae72d9180e4a" exitCode=0 Nov 29 07:17:55 crc kubenswrapper[4828]: I1129 
07:17:55.653208 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svsn6" event={"ID":"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669","Type":"ContainerDied","Data":"fc4d40291989e1d2d8e26deb0b2e3407278d63e4c3c35969771aae72d9180e4a"} Nov 29 07:17:55 crc kubenswrapper[4828]: I1129 07:17:55.655837 4828 generic.go:334] "Generic (PLEG): container finished" podID="cd962ede-c549-48d3-9cf3-aa649d254b4a" containerID="cb3699f268cfdcf91051c6273d0943a0e4db32b243d7e318d80e0e8493b3877b" exitCode=0 Nov 29 07:17:55 crc kubenswrapper[4828]: I1129 07:17:55.656808 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2hfcx" event={"ID":"cd962ede-c549-48d3-9cf3-aa649d254b4a","Type":"ContainerDied","Data":"cb3699f268cfdcf91051c6273d0943a0e4db32b243d7e318d80e0e8493b3877b"} Nov 29 07:17:58 crc kubenswrapper[4828]: I1129 07:17:58.677367 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gkdw4" event={"ID":"bb105e97-fe51-4b06-9224-e68b121623b8","Type":"ContainerStarted","Data":"3f93bf5cf2811495fc83b66f2e362b24ac3fdc4456eeacfc957653a0ac3d339f"} Nov 29 07:17:58 crc kubenswrapper[4828]: I1129 07:17:58.677553 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-gkdw4" podUID="bb105e97-fe51-4b06-9224-e68b121623b8" containerName="registry-server" containerID="cri-o://3f93bf5cf2811495fc83b66f2e362b24ac3fdc4456eeacfc957653a0ac3d339f" gracePeriod=2 Nov 29 07:17:59 crc kubenswrapper[4828]: I1129 07:17:59.683580 4828 generic.go:334] "Generic (PLEG): container finished" podID="bb105e97-fe51-4b06-9224-e68b121623b8" containerID="3f93bf5cf2811495fc83b66f2e362b24ac3fdc4456eeacfc957653a0ac3d339f" exitCode=0 Nov 29 07:17:59 crc kubenswrapper[4828]: I1129 07:17:59.683682 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gkdw4" 
event={"ID":"bb105e97-fe51-4b06-9224-e68b121623b8","Type":"ContainerDied","Data":"3f93bf5cf2811495fc83b66f2e362b24ac3fdc4456eeacfc957653a0ac3d339f"} Nov 29 07:18:00 crc kubenswrapper[4828]: I1129 07:18:00.497612 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gkdw4" Nov 29 07:18:00 crc kubenswrapper[4828]: I1129 07:18:00.579470 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmtbk\" (UniqueName: \"kubernetes.io/projected/bb105e97-fe51-4b06-9224-e68b121623b8-kube-api-access-dmtbk\") pod \"bb105e97-fe51-4b06-9224-e68b121623b8\" (UID: \"bb105e97-fe51-4b06-9224-e68b121623b8\") " Nov 29 07:18:00 crc kubenswrapper[4828]: I1129 07:18:00.586592 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb105e97-fe51-4b06-9224-e68b121623b8-kube-api-access-dmtbk" (OuterVolumeSpecName: "kube-api-access-dmtbk") pod "bb105e97-fe51-4b06-9224-e68b121623b8" (UID: "bb105e97-fe51-4b06-9224-e68b121623b8"). InnerVolumeSpecName "kube-api-access-dmtbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:18:00 crc kubenswrapper[4828]: I1129 07:18:00.682380 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmtbk\" (UniqueName: \"kubernetes.io/projected/bb105e97-fe51-4b06-9224-e68b121623b8-kube-api-access-dmtbk\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:00 crc kubenswrapper[4828]: I1129 07:18:00.702937 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gkdw4" event={"ID":"bb105e97-fe51-4b06-9224-e68b121623b8","Type":"ContainerDied","Data":"0f2b03e79c389d38ed782535521f0211b0538918fadae726b92c667de57b80c4"} Nov 29 07:18:00 crc kubenswrapper[4828]: I1129 07:18:00.703004 4828 scope.go:117] "RemoveContainer" containerID="3f93bf5cf2811495fc83b66f2e362b24ac3fdc4456eeacfc957653a0ac3d339f" Nov 29 07:18:00 crc kubenswrapper[4828]: I1129 07:18:00.703042 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gkdw4" Nov 29 07:18:00 crc kubenswrapper[4828]: I1129 07:18:00.732093 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gkdw4"] Nov 29 07:18:00 crc kubenswrapper[4828]: I1129 07:18:00.736564 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-gkdw4"] Nov 29 07:18:01 crc kubenswrapper[4828]: I1129 07:18:01.420903 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb105e97-fe51-4b06-9224-e68b121623b8" path="/var/lib/kubelet/pods/bb105e97-fe51-4b06-9224-e68b121623b8/volumes" Nov 29 07:18:03 crc kubenswrapper[4828]: I1129 07:18:03.671006 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-qgmcb" Nov 29 07:18:03 crc kubenswrapper[4828]: I1129 07:18:03.671451 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack-operators/openstack-operator-index-qgmcb" Nov 29 07:18:03 crc kubenswrapper[4828]: I1129 07:18:03.704050 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-qgmcb" Nov 29 07:18:03 crc kubenswrapper[4828]: I1129 07:18:03.749664 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-qgmcb" Nov 29 07:18:04 crc kubenswrapper[4828]: I1129 07:18:04.733489 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2hfcx" event={"ID":"cd962ede-c549-48d3-9cf3-aa649d254b4a","Type":"ContainerStarted","Data":"d3c623d98c6d025247a3e307c8eba4ee97beac76bda65b3a5b5b37aa29b15a45"} Nov 29 07:18:04 crc kubenswrapper[4828]: I1129 07:18:04.735873 4828 generic.go:334] "Generic (PLEG): container finished" podID="67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" containerID="4eab77df3fcf7e7d73b4aa73603601534defb12eb69fcc98c76be51f57204884" exitCode=0 Nov 29 07:18:04 crc kubenswrapper[4828]: I1129 07:18:04.736005 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svsn6" event={"ID":"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669","Type":"ContainerDied","Data":"4eab77df3fcf7e7d73b4aa73603601534defb12eb69fcc98c76be51f57204884"} Nov 29 07:18:05 crc kubenswrapper[4828]: I1129 07:18:05.746631 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svsn6" event={"ID":"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669","Type":"ContainerStarted","Data":"ba81a2edef830a6cc30ed703fc60733324aae5afd4c0e4d13f0153898f8b06b3"} Nov 29 07:18:05 crc kubenswrapper[4828]: I1129 07:18:05.749594 4828 generic.go:334] "Generic (PLEG): container finished" podID="8705c903-8693-4892-a4c1-d50a086db042" containerID="fecd65a5b45469ac0f2b3b03e214a600c632169e80122048ef7e5a992859bc43" exitCode=0 Nov 29 07:18:05 crc kubenswrapper[4828]: I1129 07:18:05.749666 4828 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwbqv" event={"ID":"8705c903-8693-4892-a4c1-d50a086db042","Type":"ContainerDied","Data":"fecd65a5b45469ac0f2b3b03e214a600c632169e80122048ef7e5a992859bc43"} Nov 29 07:18:05 crc kubenswrapper[4828]: I1129 07:18:05.752782 4828 generic.go:334] "Generic (PLEG): container finished" podID="cd962ede-c549-48d3-9cf3-aa649d254b4a" containerID="d3c623d98c6d025247a3e307c8eba4ee97beac76bda65b3a5b5b37aa29b15a45" exitCode=0 Nov 29 07:18:05 crc kubenswrapper[4828]: I1129 07:18:05.752824 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2hfcx" event={"ID":"cd962ede-c549-48d3-9cf3-aa649d254b4a","Type":"ContainerDied","Data":"d3c623d98c6d025247a3e307c8eba4ee97beac76bda65b3a5b5b37aa29b15a45"} Nov 29 07:18:05 crc kubenswrapper[4828]: I1129 07:18:05.771306 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-svsn6" podStartSLOduration=12.132015271 podStartE2EDuration="21.771255616s" podCreationTimestamp="2025-11-29 07:17:44 +0000 UTC" firstStartedPulling="2025-11-29 07:17:55.654826596 +0000 UTC m=+1015.276902654" lastFinishedPulling="2025-11-29 07:18:05.294066941 +0000 UTC m=+1024.916142999" observedRunningTime="2025-11-29 07:18:05.768652418 +0000 UTC m=+1025.390728496" watchObservedRunningTime="2025-11-29 07:18:05.771255616 +0000 UTC m=+1025.393331674" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.077349 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh"] Nov 29 07:18:06 crc kubenswrapper[4828]: E1129 07:18:06.077791 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb105e97-fe51-4b06-9224-e68b121623b8" containerName="registry-server" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.077824 4828 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bb105e97-fe51-4b06-9224-e68b121623b8" containerName="registry-server" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.078003 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb105e97-fe51-4b06-9224-e68b121623b8" containerName="registry-server" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.079230 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.082117 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-s2w7j" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.087868 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh"] Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.173198 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb3be18a-9791-4cc9-92bf-685171bfdaf9-util\") pod \"7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh\" (UID: \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\") " pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.173346 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb3be18a-9791-4cc9-92bf-685171bfdaf9-bundle\") pod \"7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh\" (UID: \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\") " pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.173411 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n79sr\" (UniqueName: \"kubernetes.io/projected/bb3be18a-9791-4cc9-92bf-685171bfdaf9-kube-api-access-n79sr\") pod \"7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh\" (UID: \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\") " pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.275404 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb3be18a-9791-4cc9-92bf-685171bfdaf9-util\") pod \"7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh\" (UID: \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\") " pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.275740 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb3be18a-9791-4cc9-92bf-685171bfdaf9-bundle\") pod \"7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh\" (UID: \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\") " pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.275912 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n79sr\" (UniqueName: \"kubernetes.io/projected/bb3be18a-9791-4cc9-92bf-685171bfdaf9-kube-api-access-n79sr\") pod \"7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh\" (UID: \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\") " pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.276123 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb3be18a-9791-4cc9-92bf-685171bfdaf9-util\") pod 
\"7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh\" (UID: \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\") " pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.276224 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb3be18a-9791-4cc9-92bf-685171bfdaf9-bundle\") pod \"7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh\" (UID: \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\") " pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.310624 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n79sr\" (UniqueName: \"kubernetes.io/projected/bb3be18a-9791-4cc9-92bf-685171bfdaf9-kube-api-access-n79sr\") pod \"7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh\" (UID: \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\") " pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.401149 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" Nov 29 07:18:06 crc kubenswrapper[4828]: I1129 07:18:06.862418 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh"] Nov 29 07:18:07 crc kubenswrapper[4828]: I1129 07:18:07.769397 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" event={"ID":"bb3be18a-9791-4cc9-92bf-685171bfdaf9","Type":"ContainerStarted","Data":"d499f7dea325a6054c43586d0cdbb0fefe5c07a9a1a4a97ef207a05673ce1fb7"} Nov 29 07:18:07 crc kubenswrapper[4828]: I1129 07:18:07.769440 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" event={"ID":"bb3be18a-9791-4cc9-92bf-685171bfdaf9","Type":"ContainerStarted","Data":"8d60af15df85437754ac3b86155f3fd8274c6602a544e355a112d8cb67e95d48"} Nov 29 07:18:08 crc kubenswrapper[4828]: I1129 07:18:08.780850 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2hfcx" event={"ID":"cd962ede-c549-48d3-9cf3-aa649d254b4a","Type":"ContainerStarted","Data":"4a83ac88382023c2e5fbc7628af2f09d8a74ea94fc2dcacaeb5131e11c229c74"} Nov 29 07:18:08 crc kubenswrapper[4828]: I1129 07:18:08.784424 4828 generic.go:334] "Generic (PLEG): container finished" podID="bb3be18a-9791-4cc9-92bf-685171bfdaf9" containerID="d499f7dea325a6054c43586d0cdbb0fefe5c07a9a1a4a97ef207a05673ce1fb7" exitCode=0 Nov 29 07:18:08 crc kubenswrapper[4828]: I1129 07:18:08.784483 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" event={"ID":"bb3be18a-9791-4cc9-92bf-685171bfdaf9","Type":"ContainerDied","Data":"d499f7dea325a6054c43586d0cdbb0fefe5c07a9a1a4a97ef207a05673ce1fb7"} Nov 29 07:18:08 
crc kubenswrapper[4828]: I1129 07:18:08.800849 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2hfcx" podStartSLOduration=8.667033351 podStartE2EDuration="20.800815576s" podCreationTimestamp="2025-11-29 07:17:48 +0000 UTC" firstStartedPulling="2025-11-29 07:17:55.657835934 +0000 UTC m=+1015.279911992" lastFinishedPulling="2025-11-29 07:18:07.791618159 +0000 UTC m=+1027.413694217" observedRunningTime="2025-11-29 07:18:08.797388818 +0000 UTC m=+1028.419464896" watchObservedRunningTime="2025-11-29 07:18:08.800815576 +0000 UTC m=+1028.422891634" Nov 29 07:18:11 crc kubenswrapper[4828]: I1129 07:18:11.486869 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:18:11 crc kubenswrapper[4828]: I1129 07:18:11.487366 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:18:11 crc kubenswrapper[4828]: I1129 07:18:11.807201 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwbqv" event={"ID":"8705c903-8693-4892-a4c1-d50a086db042","Type":"ContainerStarted","Data":"76650ae245a72fec68ef40156c8d8079b3c00df4aafe3baebbc64c2680235bd5"} Nov 29 07:18:11 crc kubenswrapper[4828]: I1129 07:18:11.829691 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jwbqv" podStartSLOduration=16.792598977 podStartE2EDuration="30.829669309s" podCreationTimestamp="2025-11-29 07:17:41 +0000 
UTC" firstStartedPulling="2025-11-29 07:17:55.656880639 +0000 UTC m=+1015.278956697" lastFinishedPulling="2025-11-29 07:18:09.693950971 +0000 UTC m=+1029.316027029" observedRunningTime="2025-11-29 07:18:11.829663589 +0000 UTC m=+1031.451739667" watchObservedRunningTime="2025-11-29 07:18:11.829669309 +0000 UTC m=+1031.451745367" Nov 29 07:18:14 crc kubenswrapper[4828]: I1129 07:18:14.926092 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:18:14 crc kubenswrapper[4828]: I1129 07:18:14.926581 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:18:14 crc kubenswrapper[4828]: I1129 07:18:14.964283 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:18:15 crc kubenswrapper[4828]: I1129 07:18:15.876944 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:18:17 crc kubenswrapper[4828]: I1129 07:18:17.574636 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-svsn6"] Nov 29 07:18:17 crc kubenswrapper[4828]: I1129 07:18:17.847841 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-svsn6" podUID="67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" containerName="registry-server" containerID="cri-o://ba81a2edef830a6cc30ed703fc60733324aae5afd4c0e4d13f0153898f8b06b3" gracePeriod=2 Nov 29 07:18:18 crc kubenswrapper[4828]: I1129 07:18:18.715925 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:18:18 crc kubenswrapper[4828]: I1129 07:18:18.716335 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:18:18 crc kubenswrapper[4828]: I1129 07:18:18.755741 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:18:18 crc kubenswrapper[4828]: I1129 07:18:18.856536 4828 generic.go:334] "Generic (PLEG): container finished" podID="67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" containerID="ba81a2edef830a6cc30ed703fc60733324aae5afd4c0e4d13f0153898f8b06b3" exitCode=0 Nov 29 07:18:18 crc kubenswrapper[4828]: I1129 07:18:18.856617 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svsn6" event={"ID":"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669","Type":"ContainerDied","Data":"ba81a2edef830a6cc30ed703fc60733324aae5afd4c0e4d13f0153898f8b06b3"} Nov 29 07:18:18 crc kubenswrapper[4828]: I1129 07:18:18.902351 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.058875 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.219343 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l9l9\" (UniqueName: \"kubernetes.io/projected/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-kube-api-access-5l9l9\") pod \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\" (UID: \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\") " Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.219414 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-catalog-content\") pod \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\" (UID: \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\") " Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.219510 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-utilities\") pod \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\" (UID: \"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669\") " Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.220505 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-utilities" (OuterVolumeSpecName: "utilities") pod "67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" (UID: "67e1ef8a-a9c4-41ed-bce5-73f3d8d33669"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.226479 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-kube-api-access-5l9l9" (OuterVolumeSpecName: "kube-api-access-5l9l9") pod "67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" (UID: "67e1ef8a-a9c4-41ed-bce5-73f3d8d33669"). InnerVolumeSpecName "kube-api-access-5l9l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.237082 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" (UID: "67e1ef8a-a9c4-41ed-bce5-73f3d8d33669"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.320702 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l9l9\" (UniqueName: \"kubernetes.io/projected/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-kube-api-access-5l9l9\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.320757 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.320772 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.865693 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svsn6" event={"ID":"67e1ef8a-a9c4-41ed-bce5-73f3d8d33669","Type":"ContainerDied","Data":"b73e22ae2c821ad41ac4a345f7d1198501775bca3a1068283b5cd53deae17047"} Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.865836 4828 scope.go:117] "RemoveContainer" containerID="ba81a2edef830a6cc30ed703fc60733324aae5afd4c0e4d13f0153898f8b06b3" Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.865743 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svsn6" Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.891783 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-svsn6"] Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.896167 4828 scope.go:117] "RemoveContainer" containerID="4eab77df3fcf7e7d73b4aa73603601534defb12eb69fcc98c76be51f57204884" Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.902946 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-svsn6"] Nov 29 07:18:19 crc kubenswrapper[4828]: I1129 07:18:19.916563 4828 scope.go:117] "RemoveContainer" containerID="fc4d40291989e1d2d8e26deb0b2e3407278d63e4c3c35969771aae72d9180e4a" Nov 29 07:18:20 crc kubenswrapper[4828]: I1129 07:18:20.791724 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2hfcx"] Nov 29 07:18:20 crc kubenswrapper[4828]: I1129 07:18:20.872238 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2hfcx" podUID="cd962ede-c549-48d3-9cf3-aa649d254b4a" containerName="registry-server" containerID="cri-o://4a83ac88382023c2e5fbc7628af2f09d8a74ea94fc2dcacaeb5131e11c229c74" gracePeriod=2 Nov 29 07:18:21 crc kubenswrapper[4828]: I1129 07:18:21.438956 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" path="/var/lib/kubelet/pods/67e1ef8a-a9c4-41ed-bce5-73f3d8d33669/volumes" Nov 29 07:18:21 crc kubenswrapper[4828]: I1129 07:18:21.734486 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:18:21 crc kubenswrapper[4828]: I1129 07:18:21.734586 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:18:21 crc kubenswrapper[4828]: I1129 
07:18:21.783986 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:18:21 crc kubenswrapper[4828]: I1129 07:18:21.919686 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jwbqv" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.274862 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.466053 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfsvf\" (UniqueName: \"kubernetes.io/projected/cd962ede-c549-48d3-9cf3-aa649d254b4a-kube-api-access-pfsvf\") pod \"cd962ede-c549-48d3-9cf3-aa649d254b4a\" (UID: \"cd962ede-c549-48d3-9cf3-aa649d254b4a\") " Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.466498 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd962ede-c549-48d3-9cf3-aa649d254b4a-catalog-content\") pod \"cd962ede-c549-48d3-9cf3-aa649d254b4a\" (UID: \"cd962ede-c549-48d3-9cf3-aa649d254b4a\") " Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.466595 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd962ede-c549-48d3-9cf3-aa649d254b4a-utilities\") pod \"cd962ede-c549-48d3-9cf3-aa649d254b4a\" (UID: \"cd962ede-c549-48d3-9cf3-aa649d254b4a\") " Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.467547 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd962ede-c549-48d3-9cf3-aa649d254b4a-utilities" (OuterVolumeSpecName: "utilities") pod "cd962ede-c549-48d3-9cf3-aa649d254b4a" (UID: "cd962ede-c549-48d3-9cf3-aa649d254b4a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.470798 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd962ede-c549-48d3-9cf3-aa649d254b4a-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.477357 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd962ede-c549-48d3-9cf3-aa649d254b4a-kube-api-access-pfsvf" (OuterVolumeSpecName: "kube-api-access-pfsvf") pod "cd962ede-c549-48d3-9cf3-aa649d254b4a" (UID: "cd962ede-c549-48d3-9cf3-aa649d254b4a"). InnerVolumeSpecName "kube-api-access-pfsvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.515448 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd962ede-c549-48d3-9cf3-aa649d254b4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd962ede-c549-48d3-9cf3-aa649d254b4a" (UID: "cd962ede-c549-48d3-9cf3-aa649d254b4a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.572186 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfsvf\" (UniqueName: \"kubernetes.io/projected/cd962ede-c549-48d3-9cf3-aa649d254b4a-kube-api-access-pfsvf\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.572224 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd962ede-c549-48d3-9cf3-aa649d254b4a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.886133 4828 generic.go:334] "Generic (PLEG): container finished" podID="cd962ede-c549-48d3-9cf3-aa649d254b4a" containerID="4a83ac88382023c2e5fbc7628af2f09d8a74ea94fc2dcacaeb5131e11c229c74" exitCode=0 Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.886187 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2hfcx" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.886241 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2hfcx" event={"ID":"cd962ede-c549-48d3-9cf3-aa649d254b4a","Type":"ContainerDied","Data":"4a83ac88382023c2e5fbc7628af2f09d8a74ea94fc2dcacaeb5131e11c229c74"} Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.886782 4828 scope.go:117] "RemoveContainer" containerID="4a83ac88382023c2e5fbc7628af2f09d8a74ea94fc2dcacaeb5131e11c229c74" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.887128 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2hfcx" event={"ID":"cd962ede-c549-48d3-9cf3-aa649d254b4a","Type":"ContainerDied","Data":"67f2287d6d2d0585f73f550609fa5a799e769f9a204c0e173ef66b43d17747e1"} Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.889852 4828 generic.go:334] "Generic (PLEG): container 
finished" podID="bb3be18a-9791-4cc9-92bf-685171bfdaf9" containerID="167c039fa96931dad592578d6016cfdf97f8d3ee2b82a0fb6236533ec86fb366" exitCode=0 Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.889981 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" event={"ID":"bb3be18a-9791-4cc9-92bf-685171bfdaf9","Type":"ContainerDied","Data":"167c039fa96931dad592578d6016cfdf97f8d3ee2b82a0fb6236533ec86fb366"} Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.920780 4828 scope.go:117] "RemoveContainer" containerID="d3c623d98c6d025247a3e307c8eba4ee97beac76bda65b3a5b5b37aa29b15a45" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.930581 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2hfcx"] Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.943863 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2hfcx"] Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.944945 4828 scope.go:117] "RemoveContainer" containerID="cb3699f268cfdcf91051c6273d0943a0e4db32b243d7e318d80e0e8493b3877b" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.972300 4828 scope.go:117] "RemoveContainer" containerID="4a83ac88382023c2e5fbc7628af2f09d8a74ea94fc2dcacaeb5131e11c229c74" Nov 29 07:18:22 crc kubenswrapper[4828]: E1129 07:18:22.972977 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a83ac88382023c2e5fbc7628af2f09d8a74ea94fc2dcacaeb5131e11c229c74\": container with ID starting with 4a83ac88382023c2e5fbc7628af2f09d8a74ea94fc2dcacaeb5131e11c229c74 not found: ID does not exist" containerID="4a83ac88382023c2e5fbc7628af2f09d8a74ea94fc2dcacaeb5131e11c229c74" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.973026 4828 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4a83ac88382023c2e5fbc7628af2f09d8a74ea94fc2dcacaeb5131e11c229c74"} err="failed to get container status \"4a83ac88382023c2e5fbc7628af2f09d8a74ea94fc2dcacaeb5131e11c229c74\": rpc error: code = NotFound desc = could not find container \"4a83ac88382023c2e5fbc7628af2f09d8a74ea94fc2dcacaeb5131e11c229c74\": container with ID starting with 4a83ac88382023c2e5fbc7628af2f09d8a74ea94fc2dcacaeb5131e11c229c74 not found: ID does not exist" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.973056 4828 scope.go:117] "RemoveContainer" containerID="d3c623d98c6d025247a3e307c8eba4ee97beac76bda65b3a5b5b37aa29b15a45" Nov 29 07:18:22 crc kubenswrapper[4828]: E1129 07:18:22.974261 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c623d98c6d025247a3e307c8eba4ee97beac76bda65b3a5b5b37aa29b15a45\": container with ID starting with d3c623d98c6d025247a3e307c8eba4ee97beac76bda65b3a5b5b37aa29b15a45 not found: ID does not exist" containerID="d3c623d98c6d025247a3e307c8eba4ee97beac76bda65b3a5b5b37aa29b15a45" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.974302 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c623d98c6d025247a3e307c8eba4ee97beac76bda65b3a5b5b37aa29b15a45"} err="failed to get container status \"d3c623d98c6d025247a3e307c8eba4ee97beac76bda65b3a5b5b37aa29b15a45\": rpc error: code = NotFound desc = could not find container \"d3c623d98c6d025247a3e307c8eba4ee97beac76bda65b3a5b5b37aa29b15a45\": container with ID starting with d3c623d98c6d025247a3e307c8eba4ee97beac76bda65b3a5b5b37aa29b15a45 not found: ID does not exist" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.974319 4828 scope.go:117] "RemoveContainer" containerID="cb3699f268cfdcf91051c6273d0943a0e4db32b243d7e318d80e0e8493b3877b" Nov 29 07:18:22 crc kubenswrapper[4828]: E1129 07:18:22.974758 4828 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cb3699f268cfdcf91051c6273d0943a0e4db32b243d7e318d80e0e8493b3877b\": container with ID starting with cb3699f268cfdcf91051c6273d0943a0e4db32b243d7e318d80e0e8493b3877b not found: ID does not exist" containerID="cb3699f268cfdcf91051c6273d0943a0e4db32b243d7e318d80e0e8493b3877b" Nov 29 07:18:22 crc kubenswrapper[4828]: I1129 07:18:22.974803 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb3699f268cfdcf91051c6273d0943a0e4db32b243d7e318d80e0e8493b3877b"} err="failed to get container status \"cb3699f268cfdcf91051c6273d0943a0e4db32b243d7e318d80e0e8493b3877b\": rpc error: code = NotFound desc = could not find container \"cb3699f268cfdcf91051c6273d0943a0e4db32b243d7e318d80e0e8493b3877b\": container with ID starting with cb3699f268cfdcf91051c6273d0943a0e4db32b243d7e318d80e0e8493b3877b not found: ID does not exist" Nov 29 07:18:23 crc kubenswrapper[4828]: I1129 07:18:23.420245 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd962ede-c549-48d3-9cf3-aa649d254b4a" path="/var/lib/kubelet/pods/cd962ede-c549-48d3-9cf3-aa649d254b4a/volumes" Nov 29 07:18:25 crc kubenswrapper[4828]: I1129 07:18:25.614975 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jwbqv"] Nov 29 07:18:25 crc kubenswrapper[4828]: I1129 07:18:25.696388 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jr7qs"] Nov 29 07:18:25 crc kubenswrapper[4828]: I1129 07:18:25.697007 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jr7qs" podUID="5d8cfc2c-2879-4633-95e5-8ea070145a47" containerName="registry-server" containerID="cri-o://38c0562c50858a8ed751e33673c0d88dac47f4c463b2a8934984585f0b143ccc" gracePeriod=2 Nov 29 07:18:27 crc kubenswrapper[4828]: I1129 07:18:27.961056 4828 generic.go:334] 
"Generic (PLEG): container finished" podID="bb3be18a-9791-4cc9-92bf-685171bfdaf9" containerID="ff66f2f2077f72e3f6b72acfc1e2f327bff7dbba86c46a67fc8fd244714dac2b" exitCode=0 Nov 29 07:18:27 crc kubenswrapper[4828]: I1129 07:18:27.961208 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" event={"ID":"bb3be18a-9791-4cc9-92bf-685171bfdaf9","Type":"ContainerDied","Data":"ff66f2f2077f72e3f6b72acfc1e2f327bff7dbba86c46a67fc8fd244714dac2b"} Nov 29 07:18:27 crc kubenswrapper[4828]: I1129 07:18:27.964029 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d8cfc2c-2879-4633-95e5-8ea070145a47" containerID="38c0562c50858a8ed751e33673c0d88dac47f4c463b2a8934984585f0b143ccc" exitCode=0 Nov 29 07:18:27 crc kubenswrapper[4828]: I1129 07:18:27.964070 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jr7qs" event={"ID":"5d8cfc2c-2879-4633-95e5-8ea070145a47","Type":"ContainerDied","Data":"38c0562c50858a8ed751e33673c0d88dac47f4c463b2a8934984585f0b143ccc"} Nov 29 07:18:28 crc kubenswrapper[4828]: I1129 07:18:28.330552 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:18:28 crc kubenswrapper[4828]: I1129 07:18:28.469560 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs5gg\" (UniqueName: \"kubernetes.io/projected/5d8cfc2c-2879-4633-95e5-8ea070145a47-kube-api-access-hs5gg\") pod \"5d8cfc2c-2879-4633-95e5-8ea070145a47\" (UID: \"5d8cfc2c-2879-4633-95e5-8ea070145a47\") " Nov 29 07:18:28 crc kubenswrapper[4828]: I1129 07:18:28.470078 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8cfc2c-2879-4633-95e5-8ea070145a47-utilities\") pod \"5d8cfc2c-2879-4633-95e5-8ea070145a47\" (UID: \"5d8cfc2c-2879-4633-95e5-8ea070145a47\") " Nov 29 07:18:28 crc kubenswrapper[4828]: I1129 07:18:28.470233 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8cfc2c-2879-4633-95e5-8ea070145a47-catalog-content\") pod \"5d8cfc2c-2879-4633-95e5-8ea070145a47\" (UID: \"5d8cfc2c-2879-4633-95e5-8ea070145a47\") " Nov 29 07:18:28 crc kubenswrapper[4828]: I1129 07:18:28.471447 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d8cfc2c-2879-4633-95e5-8ea070145a47-utilities" (OuterVolumeSpecName: "utilities") pod "5d8cfc2c-2879-4633-95e5-8ea070145a47" (UID: "5d8cfc2c-2879-4633-95e5-8ea070145a47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:18:28 crc kubenswrapper[4828]: I1129 07:18:28.480522 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d8cfc2c-2879-4633-95e5-8ea070145a47-kube-api-access-hs5gg" (OuterVolumeSpecName: "kube-api-access-hs5gg") pod "5d8cfc2c-2879-4633-95e5-8ea070145a47" (UID: "5d8cfc2c-2879-4633-95e5-8ea070145a47"). InnerVolumeSpecName "kube-api-access-hs5gg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:18:28 crc kubenswrapper[4828]: I1129 07:18:28.522118 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d8cfc2c-2879-4633-95e5-8ea070145a47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d8cfc2c-2879-4633-95e5-8ea070145a47" (UID: "5d8cfc2c-2879-4633-95e5-8ea070145a47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:18:28 crc kubenswrapper[4828]: I1129 07:18:28.572400 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs5gg\" (UniqueName: \"kubernetes.io/projected/5d8cfc2c-2879-4633-95e5-8ea070145a47-kube-api-access-hs5gg\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:28 crc kubenswrapper[4828]: I1129 07:18:28.572689 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8cfc2c-2879-4633-95e5-8ea070145a47-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:28 crc kubenswrapper[4828]: I1129 07:18:28.572775 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8cfc2c-2879-4633-95e5-8ea070145a47-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:28 crc kubenswrapper[4828]: I1129 07:18:28.976558 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jr7qs" Nov 29 07:18:28 crc kubenswrapper[4828]: I1129 07:18:28.977181 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jr7qs" event={"ID":"5d8cfc2c-2879-4633-95e5-8ea070145a47","Type":"ContainerDied","Data":"92c9edf1a88dde6e0587c604f4af074ad93050cf46ea58c4f23320ce579f5ba9"} Nov 29 07:18:28 crc kubenswrapper[4828]: I1129 07:18:28.977241 4828 scope.go:117] "RemoveContainer" containerID="38c0562c50858a8ed751e33673c0d88dac47f4c463b2a8934984585f0b143ccc" Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.014291 4828 scope.go:117] "RemoveContainer" containerID="8fef37a1037ba43c080c66220abf4fbaffb24fbea4ea732ab3e2c5adea64b4c6" Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.022052 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jr7qs"] Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.027455 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jr7qs"] Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.034001 4828 scope.go:117] "RemoveContainer" containerID="880ed5fac18c378e12cfc2789d564af6140afa92b6583d2a3e610cc045f8f331" Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.221808 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.399833 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb3be18a-9791-4cc9-92bf-685171bfdaf9-bundle\") pod \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\" (UID: \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\") " Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.399956 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb3be18a-9791-4cc9-92bf-685171bfdaf9-util\") pod \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\" (UID: \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\") " Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.400084 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n79sr\" (UniqueName: \"kubernetes.io/projected/bb3be18a-9791-4cc9-92bf-685171bfdaf9-kube-api-access-n79sr\") pod \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\" (UID: \"bb3be18a-9791-4cc9-92bf-685171bfdaf9\") " Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.401433 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb3be18a-9791-4cc9-92bf-685171bfdaf9-bundle" (OuterVolumeSpecName: "bundle") pod "bb3be18a-9791-4cc9-92bf-685171bfdaf9" (UID: "bb3be18a-9791-4cc9-92bf-685171bfdaf9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.404469 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3be18a-9791-4cc9-92bf-685171bfdaf9-kube-api-access-n79sr" (OuterVolumeSpecName: "kube-api-access-n79sr") pod "bb3be18a-9791-4cc9-92bf-685171bfdaf9" (UID: "bb3be18a-9791-4cc9-92bf-685171bfdaf9"). InnerVolumeSpecName "kube-api-access-n79sr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.411250 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb3be18a-9791-4cc9-92bf-685171bfdaf9-util" (OuterVolumeSpecName: "util") pod "bb3be18a-9791-4cc9-92bf-685171bfdaf9" (UID: "bb3be18a-9791-4cc9-92bf-685171bfdaf9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.423832 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d8cfc2c-2879-4633-95e5-8ea070145a47" path="/var/lib/kubelet/pods/5d8cfc2c-2879-4633-95e5-8ea070145a47/volumes" Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.502049 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n79sr\" (UniqueName: \"kubernetes.io/projected/bb3be18a-9791-4cc9-92bf-685171bfdaf9-kube-api-access-n79sr\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.502097 4828 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb3be18a-9791-4cc9-92bf-685171bfdaf9-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.502109 4828 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb3be18a-9791-4cc9-92bf-685171bfdaf9-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.985347 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.985328 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh" event={"ID":"bb3be18a-9791-4cc9-92bf-685171bfdaf9","Type":"ContainerDied","Data":"8d60af15df85437754ac3b86155f3fd8274c6602a544e355a112d8cb67e95d48"} Nov 29 07:18:29 crc kubenswrapper[4828]: I1129 07:18:29.985527 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d60af15df85437754ac3b86155f3fd8274c6602a544e355a112d8cb67e95d48" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.184451 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7f7c9dc57b-dhcn7"] Nov 29 07:18:33 crc kubenswrapper[4828]: E1129 07:18:33.185255 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3be18a-9791-4cc9-92bf-685171bfdaf9" containerName="pull" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185317 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3be18a-9791-4cc9-92bf-685171bfdaf9" containerName="pull" Nov 29 07:18:33 crc kubenswrapper[4828]: E1129 07:18:33.185367 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3be18a-9791-4cc9-92bf-685171bfdaf9" containerName="extract" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185378 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3be18a-9791-4cc9-92bf-685171bfdaf9" containerName="extract" Nov 29 07:18:33 crc kubenswrapper[4828]: E1129 07:18:33.185398 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8cfc2c-2879-4633-95e5-8ea070145a47" containerName="registry-server" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185410 4828 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5d8cfc2c-2879-4633-95e5-8ea070145a47" containerName="registry-server" Nov 29 07:18:33 crc kubenswrapper[4828]: E1129 07:18:33.185428 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" containerName="extract-utilities" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185441 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" containerName="extract-utilities" Nov 29 07:18:33 crc kubenswrapper[4828]: E1129 07:18:33.185453 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8cfc2c-2879-4633-95e5-8ea070145a47" containerName="extract-content" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185463 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8cfc2c-2879-4633-95e5-8ea070145a47" containerName="extract-content" Nov 29 07:18:33 crc kubenswrapper[4828]: E1129 07:18:33.185481 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd962ede-c549-48d3-9cf3-aa649d254b4a" containerName="registry-server" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185491 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd962ede-c549-48d3-9cf3-aa649d254b4a" containerName="registry-server" Nov 29 07:18:33 crc kubenswrapper[4828]: E1129 07:18:33.185505 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" containerName="extract-content" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185515 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" containerName="extract-content" Nov 29 07:18:33 crc kubenswrapper[4828]: E1129 07:18:33.185532 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3be18a-9791-4cc9-92bf-685171bfdaf9" containerName="util" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185542 4828 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bb3be18a-9791-4cc9-92bf-685171bfdaf9" containerName="util" Nov 29 07:18:33 crc kubenswrapper[4828]: E1129 07:18:33.185557 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd962ede-c549-48d3-9cf3-aa649d254b4a" containerName="extract-content" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185569 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd962ede-c549-48d3-9cf3-aa649d254b4a" containerName="extract-content" Nov 29 07:18:33 crc kubenswrapper[4828]: E1129 07:18:33.185583 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd962ede-c549-48d3-9cf3-aa649d254b4a" containerName="extract-utilities" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185593 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd962ede-c549-48d3-9cf3-aa649d254b4a" containerName="extract-utilities" Nov 29 07:18:33 crc kubenswrapper[4828]: E1129 07:18:33.185627 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8cfc2c-2879-4633-95e5-8ea070145a47" containerName="extract-utilities" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185638 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8cfc2c-2879-4633-95e5-8ea070145a47" containerName="extract-utilities" Nov 29 07:18:33 crc kubenswrapper[4828]: E1129 07:18:33.185656 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" containerName="registry-server" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185666 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" containerName="registry-server" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185898 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3be18a-9791-4cc9-92bf-685171bfdaf9" containerName="extract" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185923 4828 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cd962ede-c549-48d3-9cf3-aa649d254b4a" containerName="registry-server" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.185994 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e1ef8a-a9c4-41ed-bce5-73f3d8d33669" containerName="registry-server" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.186018 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8cfc2c-2879-4633-95e5-8ea070145a47" containerName="registry-server" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.186818 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7f7c9dc57b-dhcn7" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.188849 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-s5t4t" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.215358 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7f7c9dc57b-dhcn7"] Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.344350 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfmgn\" (UniqueName: \"kubernetes.io/projected/e839e496-a573-4f7b-819e-5a8f24c20689-kube-api-access-hfmgn\") pod \"openstack-operator-controller-operator-7f7c9dc57b-dhcn7\" (UID: \"e839e496-a573-4f7b-819e-5a8f24c20689\") " pod="openstack-operators/openstack-operator-controller-operator-7f7c9dc57b-dhcn7" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.444968 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfmgn\" (UniqueName: \"kubernetes.io/projected/e839e496-a573-4f7b-819e-5a8f24c20689-kube-api-access-hfmgn\") pod \"openstack-operator-controller-operator-7f7c9dc57b-dhcn7\" (UID: \"e839e496-a573-4f7b-819e-5a8f24c20689\") " 
pod="openstack-operators/openstack-operator-controller-operator-7f7c9dc57b-dhcn7" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.469407 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfmgn\" (UniqueName: \"kubernetes.io/projected/e839e496-a573-4f7b-819e-5a8f24c20689-kube-api-access-hfmgn\") pod \"openstack-operator-controller-operator-7f7c9dc57b-dhcn7\" (UID: \"e839e496-a573-4f7b-819e-5a8f24c20689\") " pod="openstack-operators/openstack-operator-controller-operator-7f7c9dc57b-dhcn7" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.504476 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7f7c9dc57b-dhcn7" Nov 29 07:18:33 crc kubenswrapper[4828]: I1129 07:18:33.928342 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7f7c9dc57b-dhcn7"] Nov 29 07:18:34 crc kubenswrapper[4828]: I1129 07:18:34.030526 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7f7c9dc57b-dhcn7" event={"ID":"e839e496-a573-4f7b-819e-5a8f24c20689","Type":"ContainerStarted","Data":"4badaae5abc1952730a6fe9cafb6ffd9a3605f270d32426fdcbafd051c3566d1"} Nov 29 07:18:41 crc kubenswrapper[4828]: I1129 07:18:41.486977 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:18:41 crc kubenswrapper[4828]: I1129 07:18:41.487668 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:18:41 crc kubenswrapper[4828]: I1129 07:18:41.487740 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:18:41 crc kubenswrapper[4828]: I1129 07:18:41.488454 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5d888f8d3600bd400d965197bc611e5fd51d1d573dbd26ed26d72bf3be20d36"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:18:41 crc kubenswrapper[4828]: I1129 07:18:41.488519 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://e5d888f8d3600bd400d965197bc611e5fd51d1d573dbd26ed26d72bf3be20d36" gracePeriod=600 Nov 29 07:18:42 crc kubenswrapper[4828]: I1129 07:18:42.110032 4828 generic.go:334] "Generic (PLEG): container finished" podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="e5d888f8d3600bd400d965197bc611e5fd51d1d573dbd26ed26d72bf3be20d36" exitCode=0 Nov 29 07:18:42 crc kubenswrapper[4828]: I1129 07:18:42.110091 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"e5d888f8d3600bd400d965197bc611e5fd51d1d573dbd26ed26d72bf3be20d36"} Nov 29 07:18:42 crc kubenswrapper[4828]: I1129 07:18:42.110835 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" 
event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"c82e0ff81acb7d01ceef87bfa4d82fd7e8308a493da4b0fdc2e7187d68f7ed64"} Nov 29 07:18:42 crc kubenswrapper[4828]: I1129 07:18:42.110943 4828 scope.go:117] "RemoveContainer" containerID="f5b914bfefdcc07cd9bb4f5df5d162e71875a1700dbc77fcde461a09b944198b" Nov 29 07:18:42 crc kubenswrapper[4828]: I1129 07:18:42.114781 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7f7c9dc57b-dhcn7" event={"ID":"e839e496-a573-4f7b-819e-5a8f24c20689","Type":"ContainerStarted","Data":"0234187e5cf42a359beae96ddf7b1102c2ccbb55cd05722f3217f950a8a52853"} Nov 29 07:18:42 crc kubenswrapper[4828]: I1129 07:18:42.114942 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7f7c9dc57b-dhcn7" Nov 29 07:18:42 crc kubenswrapper[4828]: I1129 07:18:42.160623 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7f7c9dc57b-dhcn7" podStartSLOduration=1.585663651 podStartE2EDuration="9.160577701s" podCreationTimestamp="2025-11-29 07:18:33 +0000 UTC" firstStartedPulling="2025-11-29 07:18:33.939379728 +0000 UTC m=+1053.561455786" lastFinishedPulling="2025-11-29 07:18:41.514293778 +0000 UTC m=+1061.136369836" observedRunningTime="2025-11-29 07:18:42.159944855 +0000 UTC m=+1061.782020903" watchObservedRunningTime="2025-11-29 07:18:42.160577701 +0000 UTC m=+1061.782653759" Nov 29 07:18:53 crc kubenswrapper[4828]: I1129 07:18:53.507321 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7f7c9dc57b-dhcn7" Nov 29 07:19:19 crc kubenswrapper[4828]: I1129 07:19:19.949188 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-jrkpv"] Nov 29 07:19:19 crc 
kubenswrapper[4828]: I1129 07:19:19.951335 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jrkpv" Nov 29 07:19:19 crc kubenswrapper[4828]: I1129 07:19:19.955014 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-s9ddc"] Nov 29 07:19:19 crc kubenswrapper[4828]: I1129 07:19:19.956245 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s9ddc" Nov 29 07:19:19 crc kubenswrapper[4828]: I1129 07:19:19.958980 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-6twt2" Nov 29 07:19:19 crc kubenswrapper[4828]: I1129 07:19:19.962331 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-jrkpv"] Nov 29 07:19:19 crc kubenswrapper[4828]: I1129 07:19:19.969943 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-dkprw"] Nov 29 07:19:19 crc kubenswrapper[4828]: I1129 07:19:19.971501 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-dkprw" Nov 29 07:19:19 crc kubenswrapper[4828]: I1129 07:19:19.973253 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-qzg9r" Nov 29 07:19:19 crc kubenswrapper[4828]: I1129 07:19:19.974024 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-869cb" Nov 29 07:19:19 crc kubenswrapper[4828]: I1129 07:19:19.987948 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-s9ddc"] Nov 29 07:19:19 crc kubenswrapper[4828]: I1129 07:19:19.992822 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-dkprw"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.007594 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-mvwrz"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.008821 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mvwrz" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.016406 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xsv5j" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.026374 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-mvwrz"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.028860 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfbgr\" (UniqueName: \"kubernetes.io/projected/f53d1403-e6c3-4696-bc32-7b711c38083e-kube-api-access-xfbgr\") pod \"barbican-operator-controller-manager-7d9dfd778-jrkpv\" (UID: \"f53d1403-e6c3-4696-bc32-7b711c38083e\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jrkpv" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.028922 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptzq4\" (UniqueName: \"kubernetes.io/projected/29d5d952-52dc-4a17-8f00-fa65fda896d0-kube-api-access-ptzq4\") pod \"cinder-operator-controller-manager-859b6ccc6-s9ddc\" (UID: \"29d5d952-52dc-4a17-8f00-fa65fda896d0\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s9ddc" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.029004 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w6kn\" (UniqueName: \"kubernetes.io/projected/a54ef84a-2f7d-47be-a9fd-699a627b3d91-kube-api-access-4w6kn\") pod \"designate-operator-controller-manager-78b4bc895b-dkprw\" (UID: \"a54ef84a-2f7d-47be-a9fd-699a627b3d91\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-dkprw" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 
07:19:20.038014 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.044994 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.048864 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-v5v5r" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.073393 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.096883 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qpxtq"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.098160 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qpxtq" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.100941 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.108398 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-dn2pz" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.108777 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.112304 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-c6q6d" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.112503 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.114661 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-xtdv5"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.115839 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-xtdv5" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.117909 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-6w7j8" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.130025 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptzq4\" (UniqueName: \"kubernetes.io/projected/29d5d952-52dc-4a17-8f00-fa65fda896d0-kube-api-access-ptzq4\") pod \"cinder-operator-controller-manager-859b6ccc6-s9ddc\" (UID: \"29d5d952-52dc-4a17-8f00-fa65fda896d0\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s9ddc" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.130097 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cvcm\" (UniqueName: \"kubernetes.io/projected/8152b24c-fd27-443d-a35e-1ca6e4a5cf3e-kube-api-access-8cvcm\") pod \"heat-operator-controller-manager-f569bc5bd-7n76r\" (UID: \"8152b24c-fd27-443d-a35e-1ca6e4a5cf3e\") " 
pod="openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.130137 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbrb7\" (UniqueName: \"kubernetes.io/projected/1048c045-97cc-4506-a0ad-48a8f47366e5-kube-api-access-dbrb7\") pod \"glance-operator-controller-manager-668d9c48b9-mvwrz\" (UID: \"1048c045-97cc-4506-a0ad-48a8f47366e5\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mvwrz" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.130202 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w6kn\" (UniqueName: \"kubernetes.io/projected/a54ef84a-2f7d-47be-a9fd-699a627b3d91-kube-api-access-4w6kn\") pod \"designate-operator-controller-manager-78b4bc895b-dkprw\" (UID: \"a54ef84a-2f7d-47be-a9fd-699a627b3d91\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-dkprw" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.130256 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wss79\" (UniqueName: \"kubernetes.io/projected/741b8a30-2d40-4d2d-b2ee-3ed44cc95ff7-kube-api-access-wss79\") pod \"horizon-operator-controller-manager-68c6d99b8f-qpxtq\" (UID: \"741b8a30-2d40-4d2d-b2ee-3ed44cc95ff7\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qpxtq" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.130307 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfbgr\" (UniqueName: \"kubernetes.io/projected/f53d1403-e6c3-4696-bc32-7b711c38083e-kube-api-access-xfbgr\") pod \"barbican-operator-controller-manager-7d9dfd778-jrkpv\" (UID: \"f53d1403-e6c3-4696-bc32-7b711c38083e\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jrkpv" Nov 29 07:19:20 crc kubenswrapper[4828]: 
I1129 07:19:20.140226 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qpxtq"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.152002 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-xtdv5"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.167368 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.171355 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.172743 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.173093 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w6kn\" (UniqueName: \"kubernetes.io/projected/a54ef84a-2f7d-47be-a9fd-699a627b3d91-kube-api-access-4w6kn\") pod \"designate-operator-controller-manager-78b4bc895b-dkprw\" (UID: \"a54ef84a-2f7d-47be-a9fd-699a627b3d91\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-dkprw" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.173623 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfbgr\" (UniqueName: \"kubernetes.io/projected/f53d1403-e6c3-4696-bc32-7b711c38083e-kube-api-access-xfbgr\") pod \"barbican-operator-controller-manager-7d9dfd778-jrkpv\" (UID: \"f53d1403-e6c3-4696-bc32-7b711c38083e\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jrkpv" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.176224 4828 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xr8gc" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.186000 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptzq4\" (UniqueName: \"kubernetes.io/projected/29d5d952-52dc-4a17-8f00-fa65fda896d0-kube-api-access-ptzq4\") pod \"cinder-operator-controller-manager-859b6ccc6-s9ddc\" (UID: \"29d5d952-52dc-4a17-8f00-fa65fda896d0\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s9ddc" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.213081 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-zr4sc"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.214095 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-zr4sc" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.216734 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.221004 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-btt87" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.229781 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-zr4sc"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.232584 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cldg5\" (UniqueName: \"kubernetes.io/projected/5d8c92ab-128c-41fa-8ae1-25b2c0776232-kube-api-access-cldg5\") pod \"ironic-operator-controller-manager-6c548fd776-xtdv5\" (UID: \"5d8c92ab-128c-41fa-8ae1-25b2c0776232\") " 
pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-xtdv5" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.232634 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert\") pod \"infra-operator-controller-manager-57548d458d-ntfvp\" (UID: \"57cd6967-e631-48d7-bbd4-856ac77f592b\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.232674 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cvcm\" (UniqueName: \"kubernetes.io/projected/8152b24c-fd27-443d-a35e-1ca6e4a5cf3e-kube-api-access-8cvcm\") pod \"heat-operator-controller-manager-f569bc5bd-7n76r\" (UID: \"8152b24c-fd27-443d-a35e-1ca6e4a5cf3e\") " pod="openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.232696 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbrb7\" (UniqueName: \"kubernetes.io/projected/1048c045-97cc-4506-a0ad-48a8f47366e5-kube-api-access-dbrb7\") pod \"glance-operator-controller-manager-668d9c48b9-mvwrz\" (UID: \"1048c045-97cc-4506-a0ad-48a8f47366e5\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mvwrz" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.232720 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ckht\" (UniqueName: \"kubernetes.io/projected/57cd6967-e631-48d7-bbd4-856ac77f592b-kube-api-access-5ckht\") pod \"infra-operator-controller-manager-57548d458d-ntfvp\" (UID: \"57cd6967-e631-48d7-bbd4-856ac77f592b\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.232751 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wch6f\" (UniqueName: \"kubernetes.io/projected/2793e6a5-22f6-4562-8253-c7c6993728fc-kube-api-access-wch6f\") pod \"keystone-operator-controller-manager-546d4bdf48-s9v88\" (UID: \"2793e6a5-22f6-4562-8253-c7c6993728fc\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.232790 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wss79\" (UniqueName: \"kubernetes.io/projected/741b8a30-2d40-4d2d-b2ee-3ed44cc95ff7-kube-api-access-wss79\") pod \"horizon-operator-controller-manager-68c6d99b8f-qpxtq\" (UID: \"741b8a30-2d40-4d2d-b2ee-3ed44cc95ff7\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qpxtq" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.243566 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-c98mh"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.245235 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-c98mh" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.251440 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wss79\" (UniqueName: \"kubernetes.io/projected/741b8a30-2d40-4d2d-b2ee-3ed44cc95ff7-kube-api-access-wss79\") pod \"horizon-operator-controller-manager-68c6d99b8f-qpxtq\" (UID: \"741b8a30-2d40-4d2d-b2ee-3ed44cc95ff7\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qpxtq" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.251941 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-kkjh9" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.257743 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cvcm\" (UniqueName: \"kubernetes.io/projected/8152b24c-fd27-443d-a35e-1ca6e4a5cf3e-kube-api-access-8cvcm\") pod \"heat-operator-controller-manager-f569bc5bd-7n76r\" (UID: \"8152b24c-fd27-443d-a35e-1ca6e4a5cf3e\") " pod="openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.266957 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.268391 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.274429 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-w7hff" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.278194 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbrb7\" (UniqueName: \"kubernetes.io/projected/1048c045-97cc-4506-a0ad-48a8f47366e5-kube-api-access-dbrb7\") pod \"glance-operator-controller-manager-668d9c48b9-mvwrz\" (UID: \"1048c045-97cc-4506-a0ad-48a8f47366e5\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mvwrz" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.305351 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-c98mh"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.306553 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jrkpv" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.309717 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s9ddc" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.321473 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-dkprw" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.334898 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert\") pod \"infra-operator-controller-manager-57548d458d-ntfvp\" (UID: \"57cd6967-e631-48d7-bbd4-856ac77f592b\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.335246 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd9fk\" (UniqueName: \"kubernetes.io/projected/0214dfe6-a1ff-4588-a6d5-91d7c3c52a2d-kube-api-access-zd9fk\") pod \"manila-operator-controller-manager-6546668bfd-zr4sc\" (UID: \"0214dfe6-a1ff-4588-a6d5-91d7c3c52a2d\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-zr4sc" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.336545 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtngw\" (UniqueName: \"kubernetes.io/projected/cd5cbb55-3997-45b7-9452-63f8354cf069-kube-api-access-gtngw\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-jsbsw\" (UID: \"cd5cbb55-3997-45b7-9452-63f8354cf069\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.336741 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ckht\" (UniqueName: \"kubernetes.io/projected/57cd6967-e631-48d7-bbd4-856ac77f592b-kube-api-access-5ckht\") pod \"infra-operator-controller-manager-57548d458d-ntfvp\" (UID: \"57cd6967-e631-48d7-bbd4-856ac77f592b\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 
07:19:20.336922 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wch6f\" (UniqueName: \"kubernetes.io/projected/2793e6a5-22f6-4562-8253-c7c6993728fc-kube-api-access-wch6f\") pod \"keystone-operator-controller-manager-546d4bdf48-s9v88\" (UID: \"2793e6a5-22f6-4562-8253-c7c6993728fc\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.337115 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87fpt\" (UniqueName: \"kubernetes.io/projected/7264f040-a8ce-49f1-8422-0b5d03b79531-kube-api-access-87fpt\") pod \"mariadb-operator-controller-manager-56bbcc9d85-c98mh\" (UID: \"7264f040-a8ce-49f1-8422-0b5d03b79531\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-c98mh" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.337233 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cldg5\" (UniqueName: \"kubernetes.io/projected/5d8c92ab-128c-41fa-8ae1-25b2c0776232-kube-api-access-cldg5\") pod \"ironic-operator-controller-manager-6c548fd776-xtdv5\" (UID: \"5d8c92ab-128c-41fa-8ae1-25b2c0776232\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-xtdv5" Nov 29 07:19:20 crc kubenswrapper[4828]: E1129 07:19:20.336564 4828 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:19:20 crc kubenswrapper[4828]: E1129 07:19:20.338202 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert podName:57cd6967-e631-48d7-bbd4-856ac77f592b nodeName:}" failed. No retries permitted until 2025-11-29 07:19:20.838154731 +0000 UTC m=+1100.460230799 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert") pod "infra-operator-controller-manager-57548d458d-ntfvp" (UID: "57cd6967-e631-48d7-bbd4-856ac77f592b") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.340067 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mvwrz" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.343546 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.379603 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.390065 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cldg5\" (UniqueName: \"kubernetes.io/projected/5d8c92ab-128c-41fa-8ae1-25b2c0776232-kube-api-access-cldg5\") pod \"ironic-operator-controller-manager-6c548fd776-xtdv5\" (UID: \"5d8c92ab-128c-41fa-8ae1-25b2c0776232\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-xtdv5" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.390139 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.391370 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.393034 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wch6f\" (UniqueName: \"kubernetes.io/projected/2793e6a5-22f6-4562-8253-c7c6993728fc-kube-api-access-wch6f\") pod \"keystone-operator-controller-manager-546d4bdf48-s9v88\" (UID: \"2793e6a5-22f6-4562-8253-c7c6993728fc\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.399627 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-wb78g" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.407388 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ckht\" (UniqueName: \"kubernetes.io/projected/57cd6967-e631-48d7-bbd4-856ac77f592b-kube-api-access-5ckht\") pod \"infra-operator-controller-manager-57548d458d-ntfvp\" (UID: \"57cd6967-e631-48d7-bbd4-856ac77f592b\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.427013 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qpxtq" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.443498 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-69cvg"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.444312 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd9fk\" (UniqueName: \"kubernetes.io/projected/0214dfe6-a1ff-4588-a6d5-91d7c3c52a2d-kube-api-access-zd9fk\") pod \"manila-operator-controller-manager-6546668bfd-zr4sc\" (UID: \"0214dfe6-a1ff-4588-a6d5-91d7c3c52a2d\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-zr4sc" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.444399 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzhjx\" (UniqueName: \"kubernetes.io/projected/a764d93d-518d-46ef-b135-eae7f3b02985-kube-api-access-wzhjx\") pod \"nova-operator-controller-manager-697bc559fc-2hkcb\" (UID: \"a764d93d-518d-46ef-b135-eae7f3b02985\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.444439 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtngw\" (UniqueName: \"kubernetes.io/projected/cd5cbb55-3997-45b7-9452-63f8354cf069-kube-api-access-gtngw\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-jsbsw\" (UID: \"cd5cbb55-3997-45b7-9452-63f8354cf069\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.444555 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87fpt\" (UniqueName: \"kubernetes.io/projected/7264f040-a8ce-49f1-8422-0b5d03b79531-kube-api-access-87fpt\") pod 
\"mariadb-operator-controller-manager-56bbcc9d85-c98mh\" (UID: \"7264f040-a8ce-49f1-8422-0b5d03b79531\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-c98mh" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.445135 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-69cvg" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.455057 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-sqlxx" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.470351 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.470717 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-xtdv5" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.475051 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtngw\" (UniqueName: \"kubernetes.io/projected/cd5cbb55-3997-45b7-9452-63f8354cf069-kube-api-access-gtngw\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-jsbsw\" (UID: \"cd5cbb55-3997-45b7-9452-63f8354cf069\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.478367 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd9fk\" (UniqueName: \"kubernetes.io/projected/0214dfe6-a1ff-4588-a6d5-91d7c3c52a2d-kube-api-access-zd9fk\") pod \"manila-operator-controller-manager-6546668bfd-zr4sc\" (UID: \"0214dfe6-a1ff-4588-a6d5-91d7c3c52a2d\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-zr4sc" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.478403 4828 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-69cvg"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.479172 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87fpt\" (UniqueName: \"kubernetes.io/projected/7264f040-a8ce-49f1-8422-0b5d03b79531-kube-api-access-87fpt\") pod \"mariadb-operator-controller-manager-56bbcc9d85-c98mh\" (UID: \"7264f040-a8ce-49f1-8422-0b5d03b79531\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-c98mh" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.496690 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.499058 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.527749 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tvzgp" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.533211 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.539837 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.540476 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.541439 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.546019 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzhjx\" (UniqueName: \"kubernetes.io/projected/a764d93d-518d-46ef-b135-eae7f3b02985-kube-api-access-wzhjx\") pod \"nova-operator-controller-manager-697bc559fc-2hkcb\" (UID: \"a764d93d-518d-46ef-b135-eae7f3b02985\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.557169 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9l4m\" (UniqueName: \"kubernetes.io/projected/ef13c53a-b7d2-46e7-aabc-37091112d6c6-kube-api-access-h9l4m\") pod \"ovn-operator-controller-manager-b6456fdb6-s887g\" (UID: \"ef13c53a-b7d2-46e7-aabc-37091112d6c6\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.557665 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qgh7\" (UniqueName: \"kubernetes.io/projected/8912a20d-9515-4c18-8e19-009876be37d9-kube-api-access-9qgh7\") pod \"octavia-operator-controller-manager-998648c74-69cvg\" (UID: \"8912a20d-9515-4c18-8e19-009876be37d9\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-69cvg" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.546442 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.556932 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-zr4sc" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.546602 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-njzfv" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.604587 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzhjx\" (UniqueName: \"kubernetes.io/projected/a764d93d-518d-46ef-b135-eae7f3b02985-kube-api-access-wzhjx\") pod \"nova-operator-controller-manager-697bc559fc-2hkcb\" (UID: \"a764d93d-518d-46ef-b135-eae7f3b02985\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.637657 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-r6mpw"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.641062 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r6mpw" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.650282 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-rf5c4" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.666463 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b\" (UID: \"b680fae3-b615-465f-bea9-d61a847a6038\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.666549 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtljr\" (UniqueName: \"kubernetes.io/projected/b680fae3-b615-465f-bea9-d61a847a6038-kube-api-access-jtljr\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b\" (UID: \"b680fae3-b615-465f-bea9-d61a847a6038\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.666609 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9l4m\" (UniqueName: \"kubernetes.io/projected/ef13c53a-b7d2-46e7-aabc-37091112d6c6-kube-api-access-h9l4m\") pod \"ovn-operator-controller-manager-b6456fdb6-s887g\" (UID: \"ef13c53a-b7d2-46e7-aabc-37091112d6c6\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.666646 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qgh7\" (UniqueName: 
\"kubernetes.io/projected/8912a20d-9515-4c18-8e19-009876be37d9-kube-api-access-9qgh7\") pod \"octavia-operator-controller-manager-998648c74-69cvg\" (UID: \"8912a20d-9515-4c18-8e19-009876be37d9\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-69cvg" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.718383 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.725106 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-c98mh" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.766331 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xg8sj"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.768781 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xg8sj" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.769132 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b\" (UID: \"b680fae3-b615-465f-bea9-d61a847a6038\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.769213 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtljr\" (UniqueName: \"kubernetes.io/projected/b680fae3-b615-465f-bea9-d61a847a6038-kube-api-access-jtljr\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b\" (UID: \"b680fae3-b615-465f-bea9-d61a847a6038\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.769374 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqrgc\" (UniqueName: \"kubernetes.io/projected/7911a66c-1116-4db9-9343-548d40f54e90-kube-api-access-vqrgc\") pod \"placement-operator-controller-manager-78f8948974-r6mpw\" (UID: \"7911a66c-1116-4db9-9343-548d40f54e90\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-r6mpw" Nov 29 07:19:20 crc kubenswrapper[4828]: E1129 07:19:20.769553 4828 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:19:20 crc kubenswrapper[4828]: E1129 07:19:20.769611 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert podName:b680fae3-b615-465f-bea9-d61a847a6038 
nodeName:}" failed. No retries permitted until 2025-11-29 07:19:21.269592295 +0000 UTC m=+1100.891668353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" (UID: "b680fae3-b615-465f-bea9-d61a847a6038") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.774905 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.784538 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-r6mpw"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.785852 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-gn6ff" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.793092 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9l4m\" (UniqueName: \"kubernetes.io/projected/ef13c53a-b7d2-46e7-aabc-37091112d6c6-kube-api-access-h9l4m\") pod \"ovn-operator-controller-manager-b6456fdb6-s887g\" (UID: \"ef13c53a-b7d2-46e7-aabc-37091112d6c6\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.793167 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xg8sj"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.797257 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qgh7\" (UniqueName: \"kubernetes.io/projected/8912a20d-9515-4c18-8e19-009876be37d9-kube-api-access-9qgh7\") pod 
\"octavia-operator-controller-manager-998648c74-69cvg\" (UID: \"8912a20d-9515-4c18-8e19-009876be37d9\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-69cvg" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.807106 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtljr\" (UniqueName: \"kubernetes.io/projected/b680fae3-b615-465f-bea9-d61a847a6038-kube-api-access-jtljr\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b\" (UID: \"b680fae3-b615-465f-bea9-d61a847a6038\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.823787 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.826850 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.829144 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.838220 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-mthpv" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.851084 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.866522 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-69cvg" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.868536 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.872002 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq6km\" (UniqueName: \"kubernetes.io/projected/a4f6c7bc-09b0-4dda-bd88-76ee93e0a907-kube-api-access-qq6km\") pod \"swift-operator-controller-manager-5f8c65bbfc-xg8sj\" (UID: \"a4f6c7bc-09b0-4dda-bd88-76ee93e0a907\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xg8sj" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.872090 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert\") pod \"infra-operator-controller-manager-57548d458d-ntfvp\" (UID: \"57cd6967-e631-48d7-bbd4-856ac77f592b\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.872121 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqrgc\" (UniqueName: \"kubernetes.io/projected/7911a66c-1116-4db9-9343-548d40f54e90-kube-api-access-vqrgc\") pod \"placement-operator-controller-manager-78f8948974-r6mpw\" (UID: \"7911a66c-1116-4db9-9343-548d40f54e90\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-r6mpw" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.872158 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9g88\" (UniqueName: \"kubernetes.io/projected/741effc8-8c8a-420e-b6c0-0b62ebc9bdbf-kube-api-access-q9g88\") pod 
\"telemetry-operator-controller-manager-76cc84c6bb-pkfzx\" (UID: \"741effc8-8c8a-420e-b6c0-0b62ebc9bdbf\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx" Nov 29 07:19:20 crc kubenswrapper[4828]: E1129 07:19:20.872433 4828 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:19:20 crc kubenswrapper[4828]: E1129 07:19:20.872504 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert podName:57cd6967-e631-48d7-bbd4-856ac77f592b nodeName:}" failed. No retries permitted until 2025-11-29 07:19:21.872466226 +0000 UTC m=+1101.494542284 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert") pod "infra-operator-controller-manager-57548d458d-ntfvp" (UID: "57cd6967-e631-48d7-bbd4-856ac77f592b") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.872818 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.891464 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cw6z7" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.892859 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.912672 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.931507 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.933047 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.936479 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-gfs2r" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.938065 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.941856 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqrgc\" (UniqueName: \"kubernetes.io/projected/7911a66c-1116-4db9-9343-548d40f54e90-kube-api-access-vqrgc\") pod \"placement-operator-controller-manager-78f8948974-r6mpw\" (UID: \"7911a66c-1116-4db9-9343-548d40f54e90\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-r6mpw" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.972977 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.974418 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.977061 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6"] Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.978044 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq6km\" (UniqueName: \"kubernetes.io/projected/a4f6c7bc-09b0-4dda-bd88-76ee93e0a907-kube-api-access-qq6km\") pod \"swift-operator-controller-manager-5f8c65bbfc-xg8sj\" (UID: \"a4f6c7bc-09b0-4dda-bd88-76ee93e0a907\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xg8sj" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.978154 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9g88\" (UniqueName: \"kubernetes.io/projected/741effc8-8c8a-420e-b6c0-0b62ebc9bdbf-kube-api-access-q9g88\") pod \"telemetry-operator-controller-manager-76cc84c6bb-pkfzx\" (UID: \"741effc8-8c8a-420e-b6c0-0b62ebc9bdbf\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.978210 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4zck\" (UniqueName: \"kubernetes.io/projected/7c7879c2-7253-4728-96b9-44c431d99fd4-kube-api-access-w4zck\") pod \"test-operator-controller-manager-5854674fcc-8rrwm\" (UID: \"7c7879c2-7253-4728-96b9-44c431d99fd4\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.978303 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z2fw\" (UniqueName: \"kubernetes.io/projected/98dc3704-84a8-46b5-aa13-f9de4ebde0a7-kube-api-access-4z2fw\") pod 
\"watcher-operator-controller-manager-769dc69bc-kzq8x\" (UID: \"98dc3704-84a8-46b5-aa13-f9de4ebde0a7\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.979520 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.979573 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-k9xv9" Nov 29 07:19:20 crc kubenswrapper[4828]: I1129 07:19:20.979760 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.004547 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9g88\" (UniqueName: \"kubernetes.io/projected/741effc8-8c8a-420e-b6c0-0b62ebc9bdbf-kube-api-access-q9g88\") pod \"telemetry-operator-controller-manager-76cc84c6bb-pkfzx\" (UID: \"741effc8-8c8a-420e-b6c0-0b62ebc9bdbf\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.010894 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq6km\" (UniqueName: \"kubernetes.io/projected/a4f6c7bc-09b0-4dda-bd88-76ee93e0a907-kube-api-access-qq6km\") pod \"swift-operator-controller-manager-5f8c65bbfc-xg8sj\" (UID: \"a4f6c7bc-09b0-4dda-bd88-76ee93e0a907\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xg8sj" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.024178 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjtrb"] Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.027714 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjtrb" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.032101 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-d54w4" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.032610 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r6mpw" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.033699 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjtrb"] Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.079919 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4zck\" (UniqueName: \"kubernetes.io/projected/7c7879c2-7253-4728-96b9-44c431d99fd4-kube-api-access-w4zck\") pod \"test-operator-controller-manager-5854674fcc-8rrwm\" (UID: \"7c7879c2-7253-4728-96b9-44c431d99fd4\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.079990 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt9hw\" (UniqueName: \"kubernetes.io/projected/7c2f01b9-cbfb-4781-bd51-2ab29504eafa-kube-api-access-pt9hw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jjtrb\" (UID: \"7c2f01b9-cbfb-4781-bd51-2ab29504eafa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjtrb" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.080027 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkq7c\" (UniqueName: \"kubernetes.io/projected/5b74289e-ed4b-4af7-b250-7b660b9c9102-kube-api-access-qkq7c\") pod 
\"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.080468 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z2fw\" (UniqueName: \"kubernetes.io/projected/98dc3704-84a8-46b5-aa13-f9de4ebde0a7-kube-api-access-4z2fw\") pod \"watcher-operator-controller-manager-769dc69bc-kzq8x\" (UID: \"98dc3704-84a8-46b5-aa13-f9de4ebde0a7\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.080519 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.080542 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.102726 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xg8sj" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.105938 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4zck\" (UniqueName: \"kubernetes.io/projected/7c7879c2-7253-4728-96b9-44c431d99fd4-kube-api-access-w4zck\") pod \"test-operator-controller-manager-5854674fcc-8rrwm\" (UID: \"7c7879c2-7253-4728-96b9-44c431d99fd4\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.112061 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z2fw\" (UniqueName: \"kubernetes.io/projected/98dc3704-84a8-46b5-aa13-f9de4ebde0a7-kube-api-access-4z2fw\") pod \"watcher-operator-controller-manager-769dc69bc-kzq8x\" (UID: \"98dc3704-84a8-46b5-aa13-f9de4ebde0a7\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.127706 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-s9ddc"] Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.167473 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.182419 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt9hw\" (UniqueName: \"kubernetes.io/projected/7c2f01b9-cbfb-4781-bd51-2ab29504eafa-kube-api-access-pt9hw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jjtrb\" (UID: \"7c2f01b9-cbfb-4781-bd51-2ab29504eafa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjtrb" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.182505 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkq7c\" (UniqueName: \"kubernetes.io/projected/5b74289e-ed4b-4af7-b250-7b660b9c9102-kube-api-access-qkq7c\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.182580 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.182612 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:21 crc kubenswrapper[4828]: E1129 07:19:21.182796 4828 
secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:19:21 crc kubenswrapper[4828]: E1129 07:19:21.182859 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs podName:5b74289e-ed4b-4af7-b250-7b660b9c9102 nodeName:}" failed. No retries permitted until 2025-11-29 07:19:21.682838627 +0000 UTC m=+1101.304914685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs") pod "openstack-operator-controller-manager-7769b678c8-gjkl6" (UID: "5b74289e-ed4b-4af7-b250-7b660b9c9102") : secret "metrics-server-cert" not found Nov 29 07:19:21 crc kubenswrapper[4828]: E1129 07:19:21.183500 4828 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:19:21 crc kubenswrapper[4828]: E1129 07:19:21.183591 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs podName:5b74289e-ed4b-4af7-b250-7b660b9c9102 nodeName:}" failed. No retries permitted until 2025-11-29 07:19:21.683566496 +0000 UTC m=+1101.305642634 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs") pod "openstack-operator-controller-manager-7769b678c8-gjkl6" (UID: "5b74289e-ed4b-4af7-b250-7b660b9c9102") : secret "webhook-server-cert" not found Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.213074 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt9hw\" (UniqueName: \"kubernetes.io/projected/7c2f01b9-cbfb-4781-bd51-2ab29504eafa-kube-api-access-pt9hw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jjtrb\" (UID: \"7c2f01b9-cbfb-4781-bd51-2ab29504eafa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjtrb" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.214697 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.225359 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkq7c\" (UniqueName: \"kubernetes.io/projected/5b74289e-ed4b-4af7-b250-7b660b9c9102-kube-api-access-qkq7c\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.292176 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b\" (UID: \"b680fae3-b615-465f-bea9-d61a847a6038\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:19:21 crc kubenswrapper[4828]: E1129 07:19:21.292412 4828 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:19:21 crc kubenswrapper[4828]: E1129 07:19:21.292473 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert podName:b680fae3-b615-465f-bea9-d61a847a6038 nodeName:}" failed. No retries permitted until 2025-11-29 07:19:22.292453814 +0000 UTC m=+1101.914529872 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" (UID: "b680fae3-b615-465f-bea9-d61a847a6038") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.294631 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-mvwrz"] Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.335790 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.354828 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.415934 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjtrb" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.478083 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mvwrz" event={"ID":"1048c045-97cc-4506-a0ad-48a8f47366e5","Type":"ContainerStarted","Data":"d32f0f85b669782a590a0923afda9abb32e489eed49c67beaf3fe2cc9ace6f12"} Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.481370 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s9ddc" event={"ID":"29d5d952-52dc-4a17-8f00-fa65fda896d0","Type":"ContainerStarted","Data":"255cfba605649fbc2b243028ad1149d0a307a428cd1c6ee0da6caf9c2b9ac6cb"} Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.706782 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.707122 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:21 crc kubenswrapper[4828]: E1129 07:19:21.707320 4828 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:19:21 crc kubenswrapper[4828]: E1129 07:19:21.707402 4828 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs podName:5b74289e-ed4b-4af7-b250-7b660b9c9102 nodeName:}" failed. No retries permitted until 2025-11-29 07:19:22.707366 +0000 UTC m=+1102.329442058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs") pod "openstack-operator-controller-manager-7769b678c8-gjkl6" (UID: "5b74289e-ed4b-4af7-b250-7b660b9c9102") : secret "metrics-server-cert" not found Nov 29 07:19:21 crc kubenswrapper[4828]: E1129 07:19:21.707813 4828 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:19:21 crc kubenswrapper[4828]: E1129 07:19:21.707850 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs podName:5b74289e-ed4b-4af7-b250-7b660b9c9102 nodeName:}" failed. No retries permitted until 2025-11-29 07:19:22.707837702 +0000 UTC m=+1102.329913770 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs") pod "openstack-operator-controller-manager-7769b678c8-gjkl6" (UID: "5b74289e-ed4b-4af7-b250-7b660b9c9102") : secret "webhook-server-cert" not found Nov 29 07:19:21 crc kubenswrapper[4828]: W1129 07:19:21.724863 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda54ef84a_2f7d_47be_a9fd_699a627b3d91.slice/crio-437b8bbed0cb5e0881d2fa8057816f0a142617da3cf5c8862e88131fe909c32a WatchSource:0}: Error finding container 437b8bbed0cb5e0881d2fa8057816f0a142617da3cf5c8862e88131fe909c32a: Status 404 returned error can't find the container with id 437b8bbed0cb5e0881d2fa8057816f0a142617da3cf5c8862e88131fe909c32a Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.727619 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-dkprw"] Nov 29 07:19:21 crc kubenswrapper[4828]: W1129 07:19:21.731135 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod741b8a30_2d40_4d2d_b2ee_3ed44cc95ff7.slice/crio-575ed6a5f56a439dd1a8190a7c64302885d16b4761b8e9112a0d1330c9eff009 WatchSource:0}: Error finding container 575ed6a5f56a439dd1a8190a7c64302885d16b4761b8e9112a0d1330c9eff009: Status 404 returned error can't find the container with id 575ed6a5f56a439dd1a8190a7c64302885d16b4761b8e9112a0d1330c9eff009 Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.739592 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qpxtq"] Nov 29 07:19:21 crc kubenswrapper[4828]: W1129 07:19:21.740320 4828 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8152b24c_fd27_443d_a35e_1ca6e4a5cf3e.slice/crio-d471c46134f7c2f5ec2ab2d975a6933d1174e26495465372f8d71fb56404f53a WatchSource:0}: Error finding container d471c46134f7c2f5ec2ab2d975a6933d1174e26495465372f8d71fb56404f53a: Status 404 returned error can't find the container with id d471c46134f7c2f5ec2ab2d975a6933d1174e26495465372f8d71fb56404f53a Nov 29 07:19:21 crc kubenswrapper[4828]: W1129 07:19:21.743958 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d8c92ab_128c_41fa_8ae1_25b2c0776232.slice/crio-7a1bd22a82a7c7d994179a171103ddc8067878c8e8914469f1bdcc4693d9dcfe WatchSource:0}: Error finding container 7a1bd22a82a7c7d994179a171103ddc8067878c8e8914469f1bdcc4693d9dcfe: Status 404 returned error can't find the container with id 7a1bd22a82a7c7d994179a171103ddc8067878c8e8914469f1bdcc4693d9dcfe Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.750631 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r"] Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.761197 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-xtdv5"] Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.914185 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert\") pod \"infra-operator-controller-manager-57548d458d-ntfvp\" (UID: \"57cd6967-e631-48d7-bbd4-856ac77f592b\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:19:21 crc kubenswrapper[4828]: E1129 07:19:21.914444 4828 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:19:21 crc 
kubenswrapper[4828]: E1129 07:19:21.914513 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert podName:57cd6967-e631-48d7-bbd4-856ac77f592b nodeName:}" failed. No retries permitted until 2025-11-29 07:19:23.914496059 +0000 UTC m=+1103.536572117 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert") pod "infra-operator-controller-manager-57548d458d-ntfvp" (UID: "57cd6967-e631-48d7-bbd4-856ac77f592b") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:19:21 crc kubenswrapper[4828]: I1129 07:19:21.914590 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-jrkpv"] Nov 29 07:19:21 crc kubenswrapper[4828]: W1129 07:19:21.926053 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf53d1403_e6c3_4696_bc32_7b711c38083e.slice/crio-63fcf15d835f6b6dec9df8dec9c365587ebe28ffeb7560092699b290a3b6f55b WatchSource:0}: Error finding container 63fcf15d835f6b6dec9df8dec9c365587ebe28ffeb7560092699b290a3b6f55b: Status 404 returned error can't find the container with id 63fcf15d835f6b6dec9df8dec9c365587ebe28ffeb7560092699b290a3b6f55b Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.138148 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-r6mpw"] Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.162023 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx"] Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.179251 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xg8sj"] Nov 29 07:19:22 crc 
kubenswrapper[4828]: I1129 07:19:22.191006 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-c98mh"] Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.217333 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-zr4sc"] Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.223706 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb"] Nov 29 07:19:22 crc kubenswrapper[4828]: W1129 07:19:22.234119 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda764d93d_518d_46ef_b135_eae7f3b02985.slice/crio-09add89567e846b0b360efa7c5806d1b896122c8a8e6e31ec2ba7054195a2d38 WatchSource:0}: Error finding container 09add89567e846b0b360efa7c5806d1b896122c8a8e6e31ec2ba7054195a2d38: Status 404 returned error can't find the container with id 09add89567e846b0b360efa7c5806d1b896122c8a8e6e31ec2ba7054195a2d38 Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.238648 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-69cvg"] Nov 29 07:19:22 crc kubenswrapper[4828]: W1129 07:19:22.239868 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7264f040_a8ce_49f1_8422_0b5d03b79531.slice/crio-82478f467ccf949c655135fe179db769a3c12c6b1ca06953121e35c631bcdbe1 WatchSource:0}: Error finding container 82478f467ccf949c655135fe179db769a3c12c6b1ca06953121e35c631bcdbe1: Status 404 returned error can't find the container with id 82478f467ccf949c655135fe179db769a3c12c6b1ca06953121e35c631bcdbe1 Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.248620 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88"] Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.267566 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g"] Nov 29 07:19:22 crc kubenswrapper[4828]: W1129 07:19:22.288750 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2793e6a5_22f6_4562_8253_c7c6993728fc.slice/crio-b855189bdec5d1217b6f34d46d2ab04beca42bfe64d2d76ebc89619f4578e19b WatchSource:0}: Error finding container b855189bdec5d1217b6f34d46d2ab04beca42bfe64d2d76ebc89619f4578e19b: Status 404 returned error can't find the container with id b855189bdec5d1217b6f34d46d2ab04beca42bfe64d2d76ebc89619f4578e19b Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.291461 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw"] Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.295953 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h9l4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-s887g_openstack-operators(ef13c53a-b7d2-46e7-aabc-37091112d6c6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.319509 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wch6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-546d4bdf48-s9v88_openstack-operators(2793e6a5-22f6-4562-8253-c7c6993728fc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.320193 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gtngw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-jsbsw_openstack-operators(cd5cbb55-3997-45b7-9452-63f8354cf069): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.321638 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h9l4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-s887g_openstack-operators(ef13c53a-b7d2-46e7-aabc-37091112d6c6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.322878 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g" podUID="ef13c53a-b7d2-46e7-aabc-37091112d6c6" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.323992 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wch6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-546d4bdf48-s9v88_openstack-operators(2793e6a5-22f6-4562-8253-c7c6993728fc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.324132 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gtngw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-jsbsw_openstack-operators(cd5cbb55-3997-45b7-9452-63f8354cf069): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.326123 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw" podUID="cd5cbb55-3997-45b7-9452-63f8354cf069" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.326156 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88" podUID="2793e6a5-22f6-4562-8253-c7c6993728fc" Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.338224 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b\" (UID: \"b680fae3-b615-465f-bea9-d61a847a6038\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.338398 4828 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.338462 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert podName:b680fae3-b615-465f-bea9-d61a847a6038 nodeName:}" failed. No retries permitted until 2025-11-29 07:19:24.338447529 +0000 UTC m=+1103.960523587 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" (UID: "b680fae3-b615-465f-bea9-d61a847a6038") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.435024 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm"] Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.441445 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjtrb"] Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.452741 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x"] Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.454767 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4zck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-8rrwm_openstack-operators(7c7879c2-7253-4728-96b9-44c431d99fd4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.457649 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4z2fw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-kzq8x_openstack-operators(98dc3704-84a8-46b5-aa13-f9de4ebde0a7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.457936 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4zck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-8rrwm_openstack-operators(7c7879c2-7253-4728-96b9-44c431d99fd4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.459242 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm" podUID="7c7879c2-7253-4728-96b9-44c431d99fd4" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.460265 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4z2fw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-kzq8x_openstack-operators(98dc3704-84a8-46b5-aa13-f9de4ebde0a7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.462775 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x" podUID="98dc3704-84a8-46b5-aa13-f9de4ebde0a7" Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.490697 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r" event={"ID":"8152b24c-fd27-443d-a35e-1ca6e4a5cf3e","Type":"ContainerStarted","Data":"d471c46134f7c2f5ec2ab2d975a6933d1174e26495465372f8d71fb56404f53a"} Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.492329 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x" event={"ID":"98dc3704-84a8-46b5-aa13-f9de4ebde0a7","Type":"ContainerStarted","Data":"b812037165e06d6fcd12e6a4f42e6620e59ff07cc9396bf6fdd08407d05af196"} Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.499706 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb" event={"ID":"a764d93d-518d-46ef-b135-eae7f3b02985","Type":"ContainerStarted","Data":"09add89567e846b0b360efa7c5806d1b896122c8a8e6e31ec2ba7054195a2d38"} Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.499570 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x" podUID="98dc3704-84a8-46b5-aa13-f9de4ebde0a7" Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.523407 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g" event={"ID":"ef13c53a-b7d2-46e7-aabc-37091112d6c6","Type":"ContainerStarted","Data":"4ae6a19f56ee97803fbf76b74e02b2499761648989dd0241af78c4a10182868b"} Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.528471 4828 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r6mpw" event={"ID":"7911a66c-1116-4db9-9343-548d40f54e90","Type":"ContainerStarted","Data":"88db04f27405c84617fd9ab9c302eede58ef4aca26d03d873d7b02315dbff726"} Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.529747 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g" podUID="ef13c53a-b7d2-46e7-aabc-37091112d6c6" Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.547909 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-dkprw" event={"ID":"a54ef84a-2f7d-47be-a9fd-699a627b3d91","Type":"ContainerStarted","Data":"437b8bbed0cb5e0881d2fa8057816f0a142617da3cf5c8862e88131fe909c32a"} Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.557419 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-zr4sc" event={"ID":"0214dfe6-a1ff-4588-a6d5-91d7c3c52a2d","Type":"ContainerStarted","Data":"d2df137eedd4f3a173085e548991eb0353d0836297c85e451cc9d02ce7ffb06e"} Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.567202 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx" event={"ID":"741effc8-8c8a-420e-b6c0-0b62ebc9bdbf","Type":"ContainerStarted","Data":"032681c7bddad2a6cb231a3c7ae6b0944e447231b790170bec1a70f04c73402d"} Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 
07:19:22.569383 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xg8sj" event={"ID":"a4f6c7bc-09b0-4dda-bd88-76ee93e0a907","Type":"ContainerStarted","Data":"0dea7f09b538aaedc1ed14731ba817ab153de44dee09ddd51dd664bc40ad0570"} Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.572505 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw" event={"ID":"cd5cbb55-3997-45b7-9452-63f8354cf069","Type":"ContainerStarted","Data":"3792cf98f2f796da6c655046ab07ecb8f37ef781845f9b08d4cc1f993a58b5f4"} Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.576761 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jrkpv" event={"ID":"f53d1403-e6c3-4696-bc32-7b711c38083e","Type":"ContainerStarted","Data":"63fcf15d835f6b6dec9df8dec9c365587ebe28ffeb7560092699b290a3b6f55b"} Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.577204 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw" podUID="cd5cbb55-3997-45b7-9452-63f8354cf069" Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.597443 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-c98mh" event={"ID":"7264f040-a8ce-49f1-8422-0b5d03b79531","Type":"ContainerStarted","Data":"82478f467ccf949c655135fe179db769a3c12c6b1ca06953121e35c631bcdbe1"} Nov 29 
07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.600136 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-69cvg" event={"ID":"8912a20d-9515-4c18-8e19-009876be37d9","Type":"ContainerStarted","Data":"7ab7ccd125932abf9bef52f8957b44bb8f8afc480af506c791d253552ef4e90e"} Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.605446 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-xtdv5" event={"ID":"5d8c92ab-128c-41fa-8ae1-25b2c0776232","Type":"ContainerStarted","Data":"7a1bd22a82a7c7d994179a171103ddc8067878c8e8914469f1bdcc4693d9dcfe"} Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.607645 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjtrb" event={"ID":"7c2f01b9-cbfb-4781-bd51-2ab29504eafa","Type":"ContainerStarted","Data":"0e1778cc8acb8e688c8ed31921d78f048de2a1d96a5bc3f9b2941152e2d0f718"} Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.609200 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88" event={"ID":"2793e6a5-22f6-4562-8253-c7c6993728fc","Type":"ContainerStarted","Data":"b855189bdec5d1217b6f34d46d2ab04beca42bfe64d2d76ebc89619f4578e19b"} Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.610169 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm" event={"ID":"7c7879c2-7253-4728-96b9-44c431d99fd4","Type":"ContainerStarted","Data":"f46c7161aeab84ad331236bf7da623e258b94b50a74edfd26f62aea57f6b58b3"} Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.612082 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qpxtq" 
event={"ID":"741b8a30-2d40-4d2d-b2ee-3ed44cc95ff7","Type":"ContainerStarted","Data":"575ed6a5f56a439dd1a8190a7c64302885d16b4761b8e9112a0d1330c9eff009"} Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.613537 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88" podUID="2793e6a5-22f6-4562-8253-c7c6993728fc" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.617953 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm" podUID="7c7879c2-7253-4728-96b9-44c431d99fd4" Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.751638 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:22 crc kubenswrapper[4828]: I1129 07:19:22.751683 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.752207 4828 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.752306 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs podName:5b74289e-ed4b-4af7-b250-7b660b9c9102 nodeName:}" failed. No retries permitted until 2025-11-29 07:19:24.752260425 +0000 UTC m=+1104.374336483 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs") pod "openstack-operator-controller-manager-7769b678c8-gjkl6" (UID: "5b74289e-ed4b-4af7-b250-7b660b9c9102") : secret "webhook-server-cert" not found Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.752335 4828 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:19:22 crc kubenswrapper[4828]: E1129 07:19:22.752406 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs podName:5b74289e-ed4b-4af7-b250-7b660b9c9102 nodeName:}" failed. No retries permitted until 2025-11-29 07:19:24.752387649 +0000 UTC m=+1104.374463707 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs") pod "openstack-operator-controller-manager-7769b678c8-gjkl6" (UID: "5b74289e-ed4b-4af7-b250-7b660b9c9102") : secret "metrics-server-cert" not found Nov 29 07:19:23 crc kubenswrapper[4828]: E1129 07:19:23.659031 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g" podUID="ef13c53a-b7d2-46e7-aabc-37091112d6c6" Nov 29 07:19:23 crc kubenswrapper[4828]: E1129 07:19:23.659199 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw" podUID="cd5cbb55-3997-45b7-9452-63f8354cf069" Nov 29 07:19:23 crc kubenswrapper[4828]: E1129 07:19:23.659557 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88" podUID="2793e6a5-22f6-4562-8253-c7c6993728fc" Nov 29 07:19:23 crc kubenswrapper[4828]: E1129 07:19:23.659870 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x" podUID="98dc3704-84a8-46b5-aa13-f9de4ebde0a7" Nov 29 07:19:23 crc kubenswrapper[4828]: E1129 07:19:23.660497 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm" podUID="7c7879c2-7253-4728-96b9-44c431d99fd4" Nov 29 07:19:24 crc kubenswrapper[4828]: I1129 07:19:24.004818 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert\") pod \"infra-operator-controller-manager-57548d458d-ntfvp\" (UID: \"57cd6967-e631-48d7-bbd4-856ac77f592b\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:19:24 crc kubenswrapper[4828]: E1129 
07:19:24.005057 4828 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:19:24 crc kubenswrapper[4828]: E1129 07:19:24.005122 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert podName:57cd6967-e631-48d7-bbd4-856ac77f592b nodeName:}" failed. No retries permitted until 2025-11-29 07:19:28.005101513 +0000 UTC m=+1107.627177571 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert") pod "infra-operator-controller-manager-57548d458d-ntfvp" (UID: "57cd6967-e631-48d7-bbd4-856ac77f592b") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:19:24 crc kubenswrapper[4828]: I1129 07:19:24.420909 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b\" (UID: \"b680fae3-b615-465f-bea9-d61a847a6038\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:19:24 crc kubenswrapper[4828]: E1129 07:19:24.421208 4828 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:19:24 crc kubenswrapper[4828]: E1129 07:19:24.421295 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert podName:b680fae3-b615-465f-bea9-d61a847a6038 nodeName:}" failed. No retries permitted until 2025-11-29 07:19:28.421258891 +0000 UTC m=+1108.043334949 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" (UID: "b680fae3-b615-465f-bea9-d61a847a6038") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:19:24 crc kubenswrapper[4828]: I1129 07:19:24.827163 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:24 crc kubenswrapper[4828]: I1129 07:19:24.827532 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:24 crc kubenswrapper[4828]: E1129 07:19:24.827479 4828 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:19:24 crc kubenswrapper[4828]: E1129 07:19:24.827795 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs podName:5b74289e-ed4b-4af7-b250-7b660b9c9102 nodeName:}" failed. No retries permitted until 2025-11-29 07:19:28.82777563 +0000 UTC m=+1108.449851688 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs") pod "openstack-operator-controller-manager-7769b678c8-gjkl6" (UID: "5b74289e-ed4b-4af7-b250-7b660b9c9102") : secret "webhook-server-cert" not found Nov 29 07:19:24 crc kubenswrapper[4828]: E1129 07:19:24.827719 4828 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:19:24 crc kubenswrapper[4828]: E1129 07:19:24.828119 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs podName:5b74289e-ed4b-4af7-b250-7b660b9c9102 nodeName:}" failed. No retries permitted until 2025-11-29 07:19:28.828108378 +0000 UTC m=+1108.450184436 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs") pod "openstack-operator-controller-manager-7769b678c8-gjkl6" (UID: "5b74289e-ed4b-4af7-b250-7b660b9c9102") : secret "metrics-server-cert" not found Nov 29 07:19:28 crc kubenswrapper[4828]: I1129 07:19:28.035664 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert\") pod \"infra-operator-controller-manager-57548d458d-ntfvp\" (UID: \"57cd6967-e631-48d7-bbd4-856ac77f592b\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:19:28 crc kubenswrapper[4828]: E1129 07:19:28.035898 4828 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:19:28 crc kubenswrapper[4828]: E1129 07:19:28.036034 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert 
podName:57cd6967-e631-48d7-bbd4-856ac77f592b nodeName:}" failed. No retries permitted until 2025-11-29 07:19:36.036007842 +0000 UTC m=+1115.658083960 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert") pod "infra-operator-controller-manager-57548d458d-ntfvp" (UID: "57cd6967-e631-48d7-bbd4-856ac77f592b") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:19:28 crc kubenswrapper[4828]: I1129 07:19:28.440781 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b\" (UID: \"b680fae3-b615-465f-bea9-d61a847a6038\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:19:28 crc kubenswrapper[4828]: E1129 07:19:28.440958 4828 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:19:28 crc kubenswrapper[4828]: E1129 07:19:28.441034 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert podName:b680fae3-b615-465f-bea9-d61a847a6038 nodeName:}" failed. No retries permitted until 2025-11-29 07:19:36.441012212 +0000 UTC m=+1116.063088270 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" (UID: "b680fae3-b615-465f-bea9-d61a847a6038") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:19:28 crc kubenswrapper[4828]: I1129 07:19:28.846218 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:28 crc kubenswrapper[4828]: I1129 07:19:28.846301 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:28 crc kubenswrapper[4828]: E1129 07:19:28.846454 4828 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:19:28 crc kubenswrapper[4828]: E1129 07:19:28.846515 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs podName:5b74289e-ed4b-4af7-b250-7b660b9c9102 nodeName:}" failed. No retries permitted until 2025-11-29 07:19:36.846496654 +0000 UTC m=+1116.468572722 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs") pod "openstack-operator-controller-manager-7769b678c8-gjkl6" (UID: "5b74289e-ed4b-4af7-b250-7b660b9c9102") : secret "metrics-server-cert" not found Nov 29 07:19:28 crc kubenswrapper[4828]: E1129 07:19:28.846903 4828 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:19:28 crc kubenswrapper[4828]: E1129 07:19:28.846941 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs podName:5b74289e-ed4b-4af7-b250-7b660b9c9102 nodeName:}" failed. No retries permitted until 2025-11-29 07:19:36.846931355 +0000 UTC m=+1116.469007413 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs") pod "openstack-operator-controller-manager-7769b678c8-gjkl6" (UID: "5b74289e-ed4b-4af7-b250-7b660b9c9102") : secret "webhook-server-cert" not found Nov 29 07:19:36 crc kubenswrapper[4828]: I1129 07:19:36.102606 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert\") pod \"infra-operator-controller-manager-57548d458d-ntfvp\" (UID: \"57cd6967-e631-48d7-bbd4-856ac77f592b\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:19:36 crc kubenswrapper[4828]: I1129 07:19:36.108244 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57cd6967-e631-48d7-bbd4-856ac77f592b-cert\") pod \"infra-operator-controller-manager-57548d458d-ntfvp\" (UID: \"57cd6967-e631-48d7-bbd4-856ac77f592b\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:19:36 crc 
kubenswrapper[4828]: I1129 07:19:36.349451 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:19:36 crc kubenswrapper[4828]: I1129 07:19:36.508534 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b\" (UID: \"b680fae3-b615-465f-bea9-d61a847a6038\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:19:36 crc kubenswrapper[4828]: I1129 07:19:36.528440 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b680fae3-b615-465f-bea9-d61a847a6038-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b\" (UID: \"b680fae3-b615-465f-bea9-d61a847a6038\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:19:36 crc kubenswrapper[4828]: I1129 07:19:36.552875 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:19:36 crc kubenswrapper[4828]: E1129 07:19:36.757865 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Nov 29 07:19:36 crc kubenswrapper[4828]: E1129 07:19:36.758365 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qgh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-69cvg_openstack-operators(8912a20d-9515-4c18-8e19-009876be37d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:19:36 crc kubenswrapper[4828]: I1129 07:19:36.918600 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:36 crc kubenswrapper[4828]: I1129 07:19:36.918641 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:36 crc kubenswrapper[4828]: I1129 07:19:36.923111 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-webhook-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:36 crc kubenswrapper[4828]: I1129 07:19:36.938740 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b74289e-ed4b-4af7-b250-7b660b9c9102-metrics-certs\") pod \"openstack-operator-controller-manager-7769b678c8-gjkl6\" (UID: \"5b74289e-ed4b-4af7-b250-7b660b9c9102\") " pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:36 crc kubenswrapper[4828]: I1129 07:19:36.972291 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:19:38 crc kubenswrapper[4828]: E1129 07:19:38.563226 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" Nov 29 07:19:38 crc kubenswrapper[4828]: E1129 07:19:38.563725 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q9g88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 
8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-pkfzx_openstack-operators(741effc8-8c8a-420e-b6c0-0b62ebc9bdbf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:19:38 crc kubenswrapper[4828]: E1129 07:19:38.653613 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.233:5001/openstack-k8s-operators/heat-operator:66343cba8aee9c653e4832d175d84f81ee575bb1" Nov 29 07:19:38 crc kubenswrapper[4828]: E1129 07:19:38.653813 4828 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.233:5001/openstack-k8s-operators/heat-operator:66343cba8aee9c653e4832d175d84f81ee575bb1" Nov 29 07:19:38 crc kubenswrapper[4828]: E1129 07:19:38.653962 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.233:5001/openstack-k8s-operators/heat-operator:66343cba8aee9c653e4832d175d84f81ee575bb1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8cvcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-f569bc5bd-7n76r_openstack-operators(8152b24c-fd27-443d-a35e-1ca6e4a5cf3e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:19:39 crc kubenswrapper[4828]: E1129 07:19:39.218341 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 29 07:19:39 crc kubenswrapper[4828]: E1129 07:19:39.218986 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pt9hw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-jjtrb_openstack-operators(7c2f01b9-cbfb-4781-bd51-2ab29504eafa): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:19:39 crc kubenswrapper[4828]: E1129 07:19:39.220188 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjtrb" podUID="7c2f01b9-cbfb-4781-bd51-2ab29504eafa" Nov 29 07:19:39 crc kubenswrapper[4828]: E1129 07:19:39.749925 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjtrb" podUID="7c2f01b9-cbfb-4781-bd51-2ab29504eafa" Nov 29 07:19:39 crc kubenswrapper[4828]: E1129 07:19:39.959174 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Nov 29 07:19:39 crc kubenswrapper[4828]: E1129 07:19:39.959378 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wzhjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-2hkcb_openstack-operators(a764d93d-518d-46ef-b135-eae7f3b02985): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:19:44 crc kubenswrapper[4828]: I1129 07:19:44.827873 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp"] Nov 29 07:19:45 crc kubenswrapper[4828]: I1129 07:19:45.297363 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6"] Nov 29 07:19:56 crc kubenswrapper[4828]: I1129 07:19:56.922416 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" event={"ID":"57cd6967-e631-48d7-bbd4-856ac77f592b","Type":"ContainerStarted","Data":"d028c1273f5408595121efba067485475ea6cf80e598ead1dbbbfb07a150e435"} Nov 29 07:19:59 crc kubenswrapper[4828]: W1129 07:19:59.880354 4828 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b74289e_ed4b_4af7_b250_7b660b9c9102.slice/crio-53baf464296e037a4fdb71e22a345546768c198610c20052938e0857f9da8593 WatchSource:0}: Error finding container 53baf464296e037a4fdb71e22a345546768c198610c20052938e0857f9da8593: Status 404 returned error can't find the container with id 53baf464296e037a4fdb71e22a345546768c198610c20052938e0857f9da8593 Nov 29 07:19:59 crc kubenswrapper[4828]: E1129 07:19:59.969894 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 07:19:59 crc kubenswrapper[4828]: E1129 07:19:59.970608 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8cvcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-f569bc5bd-7n76r_openstack-operators(8152b24c-fd27-443d-a35e-1ca6e4a5cf3e): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Nov 29 07:19:59 crc kubenswrapper[4828]: E1129 07:19:59.972465 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r" podUID="8152b24c-fd27-443d-a35e-1ca6e4a5cf3e" Nov 29 07:19:59 crc kubenswrapper[4828]: I1129 07:19:59.973793 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" event={"ID":"5b74289e-ed4b-4af7-b250-7b660b9c9102","Type":"ContainerStarted","Data":"53baf464296e037a4fdb71e22a345546768c198610c20052938e0857f9da8593"} Nov 29 07:20:00 crc 
kubenswrapper[4828]: I1129 07:20:00.397026 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b"] Nov 29 07:20:00 crc kubenswrapper[4828]: W1129 07:20:00.621169 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb680fae3_b615_465f_bea9_d61a847a6038.slice/crio-ded71be4b2ac92b8f17e0ba9a9e7de5b9f7acfc14940bcb0ee18d89a3d2f47d0 WatchSource:0}: Error finding container ded71be4b2ac92b8f17e0ba9a9e7de5b9f7acfc14940bcb0ee18d89a3d2f47d0: Status 404 returned error can't find the container with id ded71be4b2ac92b8f17e0ba9a9e7de5b9f7acfc14940bcb0ee18d89a3d2f47d0 Nov 29 07:20:00 crc kubenswrapper[4828]: E1129 07:20:00.700947 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 07:20:00 crc kubenswrapper[4828]: E1129 07:20:00.701196 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wzhjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-2hkcb_openstack-operators(a764d93d-518d-46ef-b135-eae7f3b02985): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Nov 29 07:20:00 crc kubenswrapper[4828]: E1129 07:20:00.708386 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb" podUID="a764d93d-518d-46ef-b135-eae7f3b02985" Nov 29 07:20:01 crc kubenswrapper[4828]: I1129 07:20:01.003589 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" event={"ID":"b680fae3-b615-465f-bea9-d61a847a6038","Type":"ContainerStarted","Data":"ded71be4b2ac92b8f17e0ba9a9e7de5b9f7acfc14940bcb0ee18d89a3d2f47d0"} Nov 29 
07:20:01 crc kubenswrapper[4828]: I1129 07:20:01.012629 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qpxtq" event={"ID":"741b8a30-2d40-4d2d-b2ee-3ed44cc95ff7","Type":"ContainerStarted","Data":"a2825d0a9c563c5429a649900106a4e3d980a70b7d7021b767b8b3392c38924b"} Nov 29 07:20:01 crc kubenswrapper[4828]: I1129 07:20:01.019670 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-xtdv5" event={"ID":"5d8c92ab-128c-41fa-8ae1-25b2c0776232","Type":"ContainerStarted","Data":"01f481422ace329e32d5f9ac7f8fb1b41f1d3374a901afc06efd7aa271d566d7"} Nov 29 07:20:01 crc kubenswrapper[4828]: I1129 07:20:01.030120 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r6mpw" event={"ID":"7911a66c-1116-4db9-9343-548d40f54e90","Type":"ContainerStarted","Data":"5e177280bdf4e7dcb37146c5c377548a0af97bbec3d893e7b780e4a09796e633"} Nov 29 07:20:01 crc kubenswrapper[4828]: I1129 07:20:01.031818 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xg8sj" event={"ID":"a4f6c7bc-09b0-4dda-bd88-76ee93e0a907","Type":"ContainerStarted","Data":"f2c205308550ff9f266cc472ade5fca7434f264a90f10094be8be2bfb3f736cb"} Nov 29 07:20:02 crc kubenswrapper[4828]: I1129 07:20:02.040045 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-dkprw" event={"ID":"a54ef84a-2f7d-47be-a9fd-699a627b3d91","Type":"ContainerStarted","Data":"7d8a034302166c2ee9c5ac8fe65b482911aeaac6da39a38a6b9161c118f722f1"} Nov 29 07:20:02 crc kubenswrapper[4828]: I1129 07:20:02.042433 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-zr4sc" 
event={"ID":"0214dfe6-a1ff-4588-a6d5-91d7c3c52a2d","Type":"ContainerStarted","Data":"c6bf5a04f8934e22df1c1623adff6188942ceaf2bc5441b8f022ab92625e5d7c"} Nov 29 07:20:02 crc kubenswrapper[4828]: I1129 07:20:02.044110 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" event={"ID":"5b74289e-ed4b-4af7-b250-7b660b9c9102","Type":"ContainerStarted","Data":"9c899f9d872aff24ead54da2e66c8299ba1f305f619bd466c69aa8485a06e335"} Nov 29 07:20:02 crc kubenswrapper[4828]: I1129 07:20:02.044261 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:20:02 crc kubenswrapper[4828]: I1129 07:20:02.045900 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mvwrz" event={"ID":"1048c045-97cc-4506-a0ad-48a8f47366e5","Type":"ContainerStarted","Data":"b79dc20d968245c7b463f531776e7b586523d6ecc5a9f293985506a2dfc0b070"} Nov 29 07:20:02 crc kubenswrapper[4828]: I1129 07:20:02.048588 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jrkpv" event={"ID":"f53d1403-e6c3-4696-bc32-7b711c38083e","Type":"ContainerStarted","Data":"206811e76aa5f6d2cb71c7a108ffe58e366f84411f35a22eb21284b71ed9c7b1"} Nov 29 07:20:02 crc kubenswrapper[4828]: I1129 07:20:02.049955 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s9ddc" event={"ID":"29d5d952-52dc-4a17-8f00-fa65fda896d0","Type":"ContainerStarted","Data":"faffffb0938c707e6694b81b69837567412de392eaad370b1f2d65e0b5acdc67"} Nov 29 07:20:02 crc kubenswrapper[4828]: I1129 07:20:02.051602 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-c98mh" 
event={"ID":"7264f040-a8ce-49f1-8422-0b5d03b79531","Type":"ContainerStarted","Data":"cfc83bf00b6c649b59e8e6cd033ea5bec0034f8fe1d756849b356b84103f984f"} Nov 29 07:20:02 crc kubenswrapper[4828]: I1129 07:20:02.093538 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" podStartSLOduration=42.093477634 podStartE2EDuration="42.093477634s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:20:02.086196725 +0000 UTC m=+1141.708272783" watchObservedRunningTime="2025-11-29 07:20:02.093477634 +0000 UTC m=+1141.715553692" Nov 29 07:20:03 crc kubenswrapper[4828]: I1129 07:20:03.071069 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm" event={"ID":"7c7879c2-7253-4728-96b9-44c431d99fd4","Type":"ContainerStarted","Data":"c2634f56fae798a5cd4edebc308b246368507ed7080cc1ceefb4969b51e4b163"} Nov 29 07:20:04 crc kubenswrapper[4828]: E1129 07:20:04.060790 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 07:20:04 crc kubenswrapper[4828]: E1129 07:20:04.061793 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qgh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-69cvg_openstack-operators(8912a20d-9515-4c18-8e19-009876be37d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:20:04 crc kubenswrapper[4828]: E1129 07:20:04.063008 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-69cvg" podUID="8912a20d-9515-4c18-8e19-009876be37d9" Nov 29 07:20:04 crc kubenswrapper[4828]: I1129 07:20:04.082705 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x" 
event={"ID":"98dc3704-84a8-46b5-aa13-f9de4ebde0a7","Type":"ContainerStarted","Data":"731f789a87e04fa9aa3b24412849ea930dd359663f8725473d771f28478d1ecb"} Nov 29 07:20:05 crc kubenswrapper[4828]: I1129 07:20:05.098106 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw" event={"ID":"cd5cbb55-3997-45b7-9452-63f8354cf069","Type":"ContainerStarted","Data":"ad1a0a059471317a4abbc981938b10745f08b80a5e8131b4db54bbfea1c85d2c"} Nov 29 07:20:05 crc kubenswrapper[4828]: I1129 07:20:05.100155 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88" event={"ID":"2793e6a5-22f6-4562-8253-c7c6993728fc","Type":"ContainerStarted","Data":"f01b1fee77d86d06c2cbf764121152ee1ec682a3c389b63823c942ac57b207e7"} Nov 29 07:20:05 crc kubenswrapper[4828]: I1129 07:20:05.102095 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g" event={"ID":"ef13c53a-b7d2-46e7-aabc-37091112d6c6","Type":"ContainerStarted","Data":"abd404592d46ebb8dc919acb7dbe33f833caeae842576286bf41d0c280649297"} Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.150717 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qpxtq" event={"ID":"741b8a30-2d40-4d2d-b2ee-3ed44cc95ff7","Type":"ContainerStarted","Data":"d3088b4bd4300d5be94b3e725591e3967527a9f4f2e27228c1ecefaeca04e8cc"} Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.152147 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qpxtq" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.153306 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjtrb" 
event={"ID":"7c2f01b9-cbfb-4781-bd51-2ab29504eafa","Type":"ContainerStarted","Data":"5ca9c54f725c59edd77d884505cd5474a49359269cf5165378b935d5afcf60e2"} Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.155742 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qpxtq" Nov 29 07:20:06 crc kubenswrapper[4828]: E1129 07:20:06.185413 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx" podUID="741effc8-8c8a-420e-b6c0-0b62ebc9bdbf" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.217515 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qpxtq" podStartSLOduration=2.659281909 podStartE2EDuration="46.217454361s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:21.737734385 +0000 UTC m=+1101.359810443" lastFinishedPulling="2025-11-29 07:20:05.295906837 +0000 UTC m=+1144.917982895" observedRunningTime="2025-11-29 07:20:06.191659294 +0000 UTC m=+1145.813735362" watchObservedRunningTime="2025-11-29 07:20:06.217454361 +0000 UTC m=+1145.839530429" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.238003 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r6mpw" event={"ID":"7911a66c-1116-4db9-9343-548d40f54e90","Type":"ContainerStarted","Data":"8960a568accc206044ffcd2dba183f4d64e5f24e59c927b42701d71d74098f63"} Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.241507 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r6mpw" Nov 29 07:20:06 crc 
kubenswrapper[4828]: I1129 07:20:06.248077 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r6mpw" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.276468 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw" event={"ID":"cd5cbb55-3997-45b7-9452-63f8354cf069","Type":"ContainerStarted","Data":"8f2adfc1b6b980660b2f421d58700b3d3001be7e742f2be3793ab6e3499eaad0"} Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.276721 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.278455 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm" event={"ID":"7c7879c2-7253-4728-96b9-44c431d99fd4","Type":"ContainerStarted","Data":"357e347d7c69c5d4fb417c94ec6974e29a52cc6da613624ffdaafc0c8f586697"} Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.278622 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.295757 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" event={"ID":"b680fae3-b615-465f-bea9-d61a847a6038","Type":"ContainerStarted","Data":"f3cabab47feb2db196b3882c838e34171e19a8f634f22716abee33689cf8018a"} Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.309514 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjtrb" podStartSLOduration=8.093809367 podStartE2EDuration="46.309495213s" podCreationTimestamp="2025-11-29 07:19:20 
+0000 UTC" firstStartedPulling="2025-11-29 07:19:22.444912963 +0000 UTC m=+1102.066989011" lastFinishedPulling="2025-11-29 07:20:00.660598799 +0000 UTC m=+1140.282674857" observedRunningTime="2025-11-29 07:20:06.305389047 +0000 UTC m=+1145.927465105" watchObservedRunningTime="2025-11-29 07:20:06.309495213 +0000 UTC m=+1145.931571271" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.326638 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-69cvg" event={"ID":"8912a20d-9515-4c18-8e19-009876be37d9","Type":"ContainerStarted","Data":"0da7e4911fff1524aabb54f50ff486519892462dc3b85e8968ae7d2082fc6e49"} Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.341098 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx" event={"ID":"741effc8-8c8a-420e-b6c0-0b62ebc9bdbf","Type":"ContainerStarted","Data":"0e328488717f2bca2c767cce79c724a3eaa06ff301b9b688c63fe286110b870b"} Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.359430 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r" event={"ID":"8152b24c-fd27-443d-a35e-1ca6e4a5cf3e","Type":"ContainerStarted","Data":"200bf26bb316f087cf713185f81d7e31897ad877413075285615ad920d08849f"} Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.384633 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r6mpw" podStartSLOduration=3.191487442 podStartE2EDuration="46.384592616s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:22.188761626 +0000 UTC m=+1101.810837694" lastFinishedPulling="2025-11-29 07:20:05.3818668 +0000 UTC m=+1145.003942868" observedRunningTime="2025-11-29 07:20:06.382451691 +0000 UTC m=+1146.004527759" watchObservedRunningTime="2025-11-29 
07:20:06.384592616 +0000 UTC m=+1146.006668674" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.385200 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw" podStartSLOduration=3.278338719 podStartE2EDuration="46.385193452s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:22.320018072 +0000 UTC m=+1101.942094130" lastFinishedPulling="2025-11-29 07:20:05.426872805 +0000 UTC m=+1145.048948863" observedRunningTime="2025-11-29 07:20:06.337438616 +0000 UTC m=+1145.959514674" watchObservedRunningTime="2025-11-29 07:20:06.385193452 +0000 UTC m=+1146.007269510" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.392682 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xg8sj" event={"ID":"a4f6c7bc-09b0-4dda-bd88-76ee93e0a907","Type":"ContainerStarted","Data":"4be90818094012e0f7722a716fdace81c8bec362c797b5720bf89877a8f28db6"} Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.395495 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xg8sj" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.401934 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xg8sj" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.406005 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb" event={"ID":"a764d93d-518d-46ef-b135-eae7f3b02985","Type":"ContainerStarted","Data":"3c14614ed6635b7e1b176e68f3bcacbc7693904a3ed89e3296c51e9c6465ef4d"} Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.413735 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" event={"ID":"57cd6967-e631-48d7-bbd4-856ac77f592b","Type":"ContainerStarted","Data":"7e4e1f152c3fec64bc16ccaf283e418678a63ce78aa5e5dbf16282dc8a30a796"} Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.422904 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jrkpv" event={"ID":"f53d1403-e6c3-4696-bc32-7b711c38083e","Type":"ContainerStarted","Data":"3f4f4a2c5196d527f3c84761c3aed4e4bd6129333f4acee5382cf27b3d2189ad"} Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.424494 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jrkpv" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.434035 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm" podStartSLOduration=3.4608272700000002 podStartE2EDuration="46.434013865s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:22.454624064 +0000 UTC m=+1102.076700122" lastFinishedPulling="2025-11-29 07:20:05.427810659 +0000 UTC m=+1145.049886717" observedRunningTime="2025-11-29 07:20:06.427887866 +0000 UTC m=+1146.049963924" watchObservedRunningTime="2025-11-29 07:20:06.434013865 +0000 UTC m=+1146.056089923" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.442633 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jrkpv" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.521737 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jrkpv" podStartSLOduration=4.180785094 podStartE2EDuration="47.521719674s" podCreationTimestamp="2025-11-29 07:19:19 
+0000 UTC" firstStartedPulling="2025-11-29 07:19:21.930469073 +0000 UTC m=+1101.552545131" lastFinishedPulling="2025-11-29 07:20:05.271403653 +0000 UTC m=+1144.893479711" observedRunningTime="2025-11-29 07:20:06.519035515 +0000 UTC m=+1146.141111573" watchObservedRunningTime="2025-11-29 07:20:06.521719674 +0000 UTC m=+1146.143795732" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.627781 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xg8sj" podStartSLOduration=3.628091318 podStartE2EDuration="46.627755838s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:22.273646012 +0000 UTC m=+1101.895722070" lastFinishedPulling="2025-11-29 07:20:05.273310532 +0000 UTC m=+1144.895386590" observedRunningTime="2025-11-29 07:20:06.574559821 +0000 UTC m=+1146.196635889" watchObservedRunningTime="2025-11-29 07:20:06.627755838 +0000 UTC m=+1146.249831906" Nov 29 07:20:06 crc kubenswrapper[4828]: I1129 07:20:06.981234 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7769b678c8-gjkl6" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.431553 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r" event={"ID":"8152b24c-fd27-443d-a35e-1ca6e4a5cf3e","Type":"ContainerStarted","Data":"27a12b16ea3a902553c622e795899f918cf26e1f2b4e6384cc523619527bf511"} Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.432708 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.433586 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x" 
event={"ID":"98dc3704-84a8-46b5-aa13-f9de4ebde0a7","Type":"ContainerStarted","Data":"e0c79eeabbe6f84089ff2d320a9f45bb0cbddf16f0fa4071c4065e540b5aa50d"} Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.433689 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.438949 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb" event={"ID":"a764d93d-518d-46ef-b135-eae7f3b02985","Type":"ContainerStarted","Data":"e9727a54694aeceddb78a33217e4fa8d9a228f3bfd1657215711c25f3bf961d5"} Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.439083 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.440742 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s9ddc" event={"ID":"29d5d952-52dc-4a17-8f00-fa65fda896d0","Type":"ContainerStarted","Data":"d34c67bf3fc94d17f896cc9150e57fcb4bc84b9f70d2396b7db6dbe2bfa70bcf"} Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.441162 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s9ddc" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.442909 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s9ddc" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.443742 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g" 
event={"ID":"ef13c53a-b7d2-46e7-aabc-37091112d6c6","Type":"ContainerStarted","Data":"017642b91d42ec8b8f0ca6acd3179329b5dcd57ecaa90ca7a549ec1cb469f999"} Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.443941 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.452023 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r" podStartSLOduration=4.116171137 podStartE2EDuration="47.452001505s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:21.743615478 +0000 UTC m=+1101.365691536" lastFinishedPulling="2025-11-29 07:20:05.079445846 +0000 UTC m=+1144.701521904" observedRunningTime="2025-11-29 07:20:07.44754825 +0000 UTC m=+1147.069624308" watchObservedRunningTime="2025-11-29 07:20:07.452001505 +0000 UTC m=+1147.074077563" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.452925 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mvwrz" event={"ID":"1048c045-97cc-4506-a0ad-48a8f47366e5","Type":"ContainerStarted","Data":"920ef808c9d98cf069f0d23d46d2ae682bf3837c07b63320dc16967129c9abfb"} Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.453386 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mvwrz" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.461482 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mvwrz" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.469175 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" 
event={"ID":"57cd6967-e631-48d7-bbd4-856ac77f592b","Type":"ContainerStarted","Data":"6e9e93fe9a609fd4b6bbff68ff02795c8afc64e2603c6d9f43fa945d94335c04"} Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.469706 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.472602 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s9ddc" podStartSLOduration=4.133241944 podStartE2EDuration="48.472585338s" podCreationTimestamp="2025-11-29 07:19:19 +0000 UTC" firstStartedPulling="2025-11-29 07:19:21.214332892 +0000 UTC m=+1100.836408950" lastFinishedPulling="2025-11-29 07:20:05.553676286 +0000 UTC m=+1145.175752344" observedRunningTime="2025-11-29 07:20:07.464223832 +0000 UTC m=+1147.086299890" watchObservedRunningTime="2025-11-29 07:20:07.472585338 +0000 UTC m=+1147.094661396" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.477558 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88" event={"ID":"2793e6a5-22f6-4562-8253-c7c6993728fc","Type":"ContainerStarted","Data":"1fa4fc966be23636a7f84153a26899905ea575938b54237aa6792fe2a4b73733"} Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.478361 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.486518 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-c98mh" event={"ID":"7264f040-a8ce-49f1-8422-0b5d03b79531","Type":"ContainerStarted","Data":"8ff25b3bed922f54248b38d03955e01c9ccbb6f06610d36cab996909ad52cd9f"} Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.487543 4828 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-c98mh" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.496521 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-c98mh" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.511720 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-dkprw" event={"ID":"a54ef84a-2f7d-47be-a9fd-699a627b3d91","Type":"ContainerStarted","Data":"399b6619c9996046ff02c9be196a9c377e7076bdce72d88e817503f86a0bbcde"} Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.512933 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-dkprw" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.514257 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g" podStartSLOduration=3.797824491 podStartE2EDuration="47.514246556s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:22.295744164 +0000 UTC m=+1101.917820222" lastFinishedPulling="2025-11-29 07:20:06.012166219 +0000 UTC m=+1145.634242287" observedRunningTime="2025-11-29 07:20:07.512587263 +0000 UTC m=+1147.134663321" watchObservedRunningTime="2025-11-29 07:20:07.514246556 +0000 UTC m=+1147.136322614" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.516973 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-dkprw" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.527579 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-xtdv5" 
event={"ID":"5d8c92ab-128c-41fa-8ae1-25b2c0776232","Type":"ContainerStarted","Data":"1079dcda5cafe1de931e1be5b6e2fc4143c9223009b3665c249b3d1f7815494a"} Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.528672 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-xtdv5" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.530799 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-xtdv5" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.535453 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-zr4sc" event={"ID":"0214dfe6-a1ff-4588-a6d5-91d7c3c52a2d","Type":"ContainerStarted","Data":"a0fc5feb7eb1f1dc48d5e2dc1a7f148c1a656f6fd493874f7e8807b838a42c69"} Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.541211 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x" podStartSLOduration=4.561660484 podStartE2EDuration="47.541195113s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:22.457474108 +0000 UTC m=+1102.079550166" lastFinishedPulling="2025-11-29 07:20:05.437008737 +0000 UTC m=+1145.059084795" observedRunningTime="2025-11-29 07:20:07.540884435 +0000 UTC m=+1147.162960503" watchObservedRunningTime="2025-11-29 07:20:07.541195113 +0000 UTC m=+1147.163271171" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.542231 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-8rrwm" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.568768 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb" podStartSLOduration=5.843012569 podStartE2EDuration="47.568748406s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:22.249157059 +0000 UTC m=+1101.871233117" lastFinishedPulling="2025-11-29 07:20:03.974892896 +0000 UTC m=+1143.596968954" observedRunningTime="2025-11-29 07:20:07.566713014 +0000 UTC m=+1147.188789072" watchObservedRunningTime="2025-11-29 07:20:07.568748406 +0000 UTC m=+1147.190824464" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.602029 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-dkprw" podStartSLOduration=4.897025307 podStartE2EDuration="48.602012767s" podCreationTimestamp="2025-11-29 07:19:19 +0000 UTC" firstStartedPulling="2025-11-29 07:19:21.731582826 +0000 UTC m=+1101.353658874" lastFinishedPulling="2025-11-29 07:20:05.436570276 +0000 UTC m=+1145.058646334" observedRunningTime="2025-11-29 07:20:07.600625071 +0000 UTC m=+1147.222701129" watchObservedRunningTime="2025-11-29 07:20:07.602012767 +0000 UTC m=+1147.224088825" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.649721 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" podStartSLOduration=41.441888085 podStartE2EDuration="47.649700241s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:56.703382966 +0000 UTC m=+1136.325459024" lastFinishedPulling="2025-11-29 07:20:02.911195122 +0000 UTC m=+1142.533271180" observedRunningTime="2025-11-29 07:20:07.641342335 +0000 UTC m=+1147.263418393" watchObservedRunningTime="2025-11-29 07:20:07.649700241 +0000 UTC m=+1147.271776299" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.680946 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-c98mh" podStartSLOduration=3.9467581640000002 podStartE2EDuration="47.680927319s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:22.249155748 +0000 UTC m=+1101.871231806" lastFinishedPulling="2025-11-29 07:20:05.983324903 +0000 UTC m=+1145.605400961" observedRunningTime="2025-11-29 07:20:07.666821014 +0000 UTC m=+1147.288897072" watchObservedRunningTime="2025-11-29 07:20:07.680927319 +0000 UTC m=+1147.303003377" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.713512 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-xtdv5" podStartSLOduration=4.044977396 podStartE2EDuration="47.713492341s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:21.746478312 +0000 UTC m=+1101.368554370" lastFinishedPulling="2025-11-29 07:20:05.414993267 +0000 UTC m=+1145.037069315" observedRunningTime="2025-11-29 07:20:07.708926633 +0000 UTC m=+1147.331002691" watchObservedRunningTime="2025-11-29 07:20:07.713492341 +0000 UTC m=+1147.335568399" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.847447 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mvwrz" podStartSLOduration=4.737074859 podStartE2EDuration="48.847424037s" podCreationTimestamp="2025-11-29 07:19:19 +0000 UTC" firstStartedPulling="2025-11-29 07:19:21.375470242 +0000 UTC m=+1100.997546300" lastFinishedPulling="2025-11-29 07:20:05.48581942 +0000 UTC m=+1145.107895478" observedRunningTime="2025-11-29 07:20:07.84481763 +0000 UTC m=+1147.466893708" watchObservedRunningTime="2025-11-29 07:20:07.847424037 +0000 UTC m=+1147.469500095" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.893338 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88" podStartSLOduration=4.232929929 podStartE2EDuration="47.893310054s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:22.319351715 +0000 UTC m=+1101.941427773" lastFinishedPulling="2025-11-29 07:20:05.97973184 +0000 UTC m=+1145.601807898" observedRunningTime="2025-11-29 07:20:07.885910193 +0000 UTC m=+1147.507986251" watchObservedRunningTime="2025-11-29 07:20:07.893310054 +0000 UTC m=+1147.515386122" Nov 29 07:20:07 crc kubenswrapper[4828]: I1129 07:20:07.916573 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-zr4sc" podStartSLOduration=4.325443763 podStartE2EDuration="47.916548606s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:22.278078217 +0000 UTC m=+1101.900154285" lastFinishedPulling="2025-11-29 07:20:05.86918307 +0000 UTC m=+1145.491259128" observedRunningTime="2025-11-29 07:20:07.91169562 +0000 UTC m=+1147.533771678" watchObservedRunningTime="2025-11-29 07:20:07.916548606 +0000 UTC m=+1147.538624664" Nov 29 07:20:08 crc kubenswrapper[4828]: I1129 07:20:08.542701 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" event={"ID":"b680fae3-b615-465f-bea9-d61a847a6038","Type":"ContainerStarted","Data":"70ee99a9eae3ef54f77548a1a20cfc5a2d96127550dbb103c27addcd9518b77e"} Nov 29 07:20:08 crc kubenswrapper[4828]: I1129 07:20:08.542872 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:20:08 crc kubenswrapper[4828]: I1129 07:20:08.545244 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-69cvg" 
event={"ID":"8912a20d-9515-4c18-8e19-009876be37d9","Type":"ContainerStarted","Data":"3c60d3c1d1744b1ffa2540a151d983e8197b969f4b110d9b5cb4f68b0434f231"} Nov 29 07:20:08 crc kubenswrapper[4828]: I1129 07:20:08.545342 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-69cvg" Nov 29 07:20:08 crc kubenswrapper[4828]: I1129 07:20:08.547081 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx" event={"ID":"741effc8-8c8a-420e-b6c0-0b62ebc9bdbf","Type":"ContainerStarted","Data":"1314abdc965f200c964fa235f40e868f9f23cd66f1e2c1cc3f12076b1f94f438"} Nov 29 07:20:08 crc kubenswrapper[4828]: I1129 07:20:08.548512 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-zr4sc" Nov 29 07:20:08 crc kubenswrapper[4828]: I1129 07:20:08.553165 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-kzq8x" Nov 29 07:20:08 crc kubenswrapper[4828]: I1129 07:20:08.553587 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-zr4sc" Nov 29 07:20:08 crc kubenswrapper[4828]: I1129 07:20:08.590458 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" podStartSLOduration=43.955936055 podStartE2EDuration="48.590433483s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:20:00.624373171 +0000 UTC m=+1140.246449229" lastFinishedPulling="2025-11-29 07:20:05.258870599 +0000 UTC m=+1144.880946657" observedRunningTime="2025-11-29 07:20:08.578447243 +0000 UTC m=+1148.200523321" watchObservedRunningTime="2025-11-29 07:20:08.590433483 
+0000 UTC m=+1148.212509541" Nov 29 07:20:08 crc kubenswrapper[4828]: I1129 07:20:08.601438 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-69cvg" podStartSLOduration=5.459548779 podStartE2EDuration="48.601396867s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:22.27240482 +0000 UTC m=+1101.894480878" lastFinishedPulling="2025-11-29 07:20:05.414252908 +0000 UTC m=+1145.036328966" observedRunningTime="2025-11-29 07:20:08.598399749 +0000 UTC m=+1148.220475817" watchObservedRunningTime="2025-11-29 07:20:08.601396867 +0000 UTC m=+1148.223472935" Nov 29 07:20:08 crc kubenswrapper[4828]: I1129 07:20:08.617253 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx" podStartSLOduration=4.012061394 podStartE2EDuration="48.617235776s" podCreationTimestamp="2025-11-29 07:19:20 +0000 UTC" firstStartedPulling="2025-11-29 07:19:22.274237548 +0000 UTC m=+1101.896313616" lastFinishedPulling="2025-11-29 07:20:06.87941193 +0000 UTC m=+1146.501487998" observedRunningTime="2025-11-29 07:20:08.615394819 +0000 UTC m=+1148.237470907" watchObservedRunningTime="2025-11-29 07:20:08.617235776 +0000 UTC m=+1148.239311834" Nov 29 07:20:09 crc kubenswrapper[4828]: I1129 07:20:09.554333 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx" Nov 29 07:20:09 crc kubenswrapper[4828]: I1129 07:20:09.556352 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-s9v88" Nov 29 07:20:10 crc kubenswrapper[4828]: I1129 07:20:10.383779 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-f569bc5bd-7n76r" Nov 29 
07:20:10 crc kubenswrapper[4828]: I1129 07:20:10.778381 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jsbsw" Nov 29 07:20:10 crc kubenswrapper[4828]: I1129 07:20:10.826410 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-2hkcb" Nov 29 07:20:10 crc kubenswrapper[4828]: I1129 07:20:10.869573 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-69cvg" Nov 29 07:20:10 crc kubenswrapper[4828]: I1129 07:20:10.895988 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-s887g" Nov 29 07:20:16 crc kubenswrapper[4828]: I1129 07:20:16.356074 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-ntfvp" Nov 29 07:20:16 crc kubenswrapper[4828]: I1129 07:20:16.560180 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b" Nov 29 07:20:21 crc kubenswrapper[4828]: I1129 07:20:21.170888 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-pkfzx" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.667286 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-j58pp"] Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.669774 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-j58pp" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.672836 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.673217 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.675119 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.675306 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-8kvqx" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.689534 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-j58pp"] Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.746377 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6v4l6"] Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.747949 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.749855 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.762366 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6v4l6"] Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.838311 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab-config\") pod \"dnsmasq-dns-675f4bcbfc-j58pp\" (UID: \"25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-j58pp" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.838593 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2wf5\" (UniqueName: \"kubernetes.io/projected/25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab-kube-api-access-z2wf5\") pod \"dnsmasq-dns-675f4bcbfc-j58pp\" (UID: \"25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-j58pp" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.940246 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2wf5\" (UniqueName: \"kubernetes.io/projected/25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab-kube-api-access-z2wf5\") pod \"dnsmasq-dns-675f4bcbfc-j58pp\" (UID: \"25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-j58pp" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.940674 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6769fd04-c7fe-4667-96b5-52414f299b7a-config\") pod \"dnsmasq-dns-78dd6ddcc-6v4l6\" (UID: \"6769fd04-c7fe-4667-96b5-52414f299b7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" Nov 
29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.940727 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6769fd04-c7fe-4667-96b5-52414f299b7a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6v4l6\" (UID: \"6769fd04-c7fe-4667-96b5-52414f299b7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.940768 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab-config\") pod \"dnsmasq-dns-675f4bcbfc-j58pp\" (UID: \"25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-j58pp" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.940811 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2tnv\" (UniqueName: \"kubernetes.io/projected/6769fd04-c7fe-4667-96b5-52414f299b7a-kube-api-access-n2tnv\") pod \"dnsmasq-dns-78dd6ddcc-6v4l6\" (UID: \"6769fd04-c7fe-4667-96b5-52414f299b7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.941858 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab-config\") pod \"dnsmasq-dns-675f4bcbfc-j58pp\" (UID: \"25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-j58pp" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 07:20:35.961831 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2wf5\" (UniqueName: \"kubernetes.io/projected/25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab-kube-api-access-z2wf5\") pod \"dnsmasq-dns-675f4bcbfc-j58pp\" (UID: \"25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-j58pp" Nov 29 07:20:35 crc kubenswrapper[4828]: I1129 
07:20:35.990068 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-j58pp" Nov 29 07:20:36 crc kubenswrapper[4828]: I1129 07:20:36.042141 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6769fd04-c7fe-4667-96b5-52414f299b7a-config\") pod \"dnsmasq-dns-78dd6ddcc-6v4l6\" (UID: \"6769fd04-c7fe-4667-96b5-52414f299b7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" Nov 29 07:20:36 crc kubenswrapper[4828]: I1129 07:20:36.042242 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6769fd04-c7fe-4667-96b5-52414f299b7a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6v4l6\" (UID: \"6769fd04-c7fe-4667-96b5-52414f299b7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" Nov 29 07:20:36 crc kubenswrapper[4828]: I1129 07:20:36.042321 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2tnv\" (UniqueName: \"kubernetes.io/projected/6769fd04-c7fe-4667-96b5-52414f299b7a-kube-api-access-n2tnv\") pod \"dnsmasq-dns-78dd6ddcc-6v4l6\" (UID: \"6769fd04-c7fe-4667-96b5-52414f299b7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" Nov 29 07:20:36 crc kubenswrapper[4828]: I1129 07:20:36.043357 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6769fd04-c7fe-4667-96b5-52414f299b7a-config\") pod \"dnsmasq-dns-78dd6ddcc-6v4l6\" (UID: \"6769fd04-c7fe-4667-96b5-52414f299b7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" Nov 29 07:20:36 crc kubenswrapper[4828]: I1129 07:20:36.043505 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6769fd04-c7fe-4667-96b5-52414f299b7a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6v4l6\" (UID: \"6769fd04-c7fe-4667-96b5-52414f299b7a\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" Nov 29 07:20:36 crc kubenswrapper[4828]: I1129 07:20:36.063189 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2tnv\" (UniqueName: \"kubernetes.io/projected/6769fd04-c7fe-4667-96b5-52414f299b7a-kube-api-access-n2tnv\") pod \"dnsmasq-dns-78dd6ddcc-6v4l6\" (UID: \"6769fd04-c7fe-4667-96b5-52414f299b7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" Nov 29 07:20:36 crc kubenswrapper[4828]: I1129 07:20:36.361815 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" Nov 29 07:20:36 crc kubenswrapper[4828]: I1129 07:20:36.458570 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-j58pp"] Nov 29 07:20:36 crc kubenswrapper[4828]: I1129 07:20:36.766239 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-j58pp" event={"ID":"25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab","Type":"ContainerStarted","Data":"52fdac7e18b86c51129b0bf51ad37f6ff5fe5ee6766e9857ab40e4dfb6f78ccb"} Nov 29 07:20:36 crc kubenswrapper[4828]: I1129 07:20:36.779485 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6v4l6"] Nov 29 07:20:36 crc kubenswrapper[4828]: W1129 07:20:36.783352 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6769fd04_c7fe_4667_96b5_52414f299b7a.slice/crio-710fa6ca97a8a13639f6af7212aa58920301d7b32d1358d66fd302dbbf8602bb WatchSource:0}: Error finding container 710fa6ca97a8a13639f6af7212aa58920301d7b32d1358d66fd302dbbf8602bb: Status 404 returned error can't find the container with id 710fa6ca97a8a13639f6af7212aa58920301d7b32d1358d66fd302dbbf8602bb Nov 29 07:20:37 crc kubenswrapper[4828]: I1129 07:20:37.774310 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" 
event={"ID":"6769fd04-c7fe-4667-96b5-52414f299b7a","Type":"ContainerStarted","Data":"710fa6ca97a8a13639f6af7212aa58920301d7b32d1358d66fd302dbbf8602bb"} Nov 29 07:20:38 crc kubenswrapper[4828]: I1129 07:20:38.795035 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-j58pp"] Nov 29 07:20:38 crc kubenswrapper[4828]: I1129 07:20:38.822822 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gx7c7"] Nov 29 07:20:38 crc kubenswrapper[4828]: I1129 07:20:38.824123 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" Nov 29 07:20:38 crc kubenswrapper[4828]: I1129 07:20:38.840298 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gx7c7"] Nov 29 07:20:38 crc kubenswrapper[4828]: I1129 07:20:38.986224 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/667169da-8564-4c09-8be0-f50d1cce0888-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gx7c7\" (UID: \"667169da-8564-4c09-8be0-f50d1cce0888\") " pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" Nov 29 07:20:38 crc kubenswrapper[4828]: I1129 07:20:38.986344 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4jjb\" (UniqueName: \"kubernetes.io/projected/667169da-8564-4c09-8be0-f50d1cce0888-kube-api-access-b4jjb\") pod \"dnsmasq-dns-666b6646f7-gx7c7\" (UID: \"667169da-8564-4c09-8be0-f50d1cce0888\") " pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" Nov 29 07:20:38 crc kubenswrapper[4828]: I1129 07:20:38.986394 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/667169da-8564-4c09-8be0-f50d1cce0888-config\") pod \"dnsmasq-dns-666b6646f7-gx7c7\" (UID: \"667169da-8564-4c09-8be0-f50d1cce0888\") " 
pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.070590 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6v4l6"] Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.087809 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/667169da-8564-4c09-8be0-f50d1cce0888-config\") pod \"dnsmasq-dns-666b6646f7-gx7c7\" (UID: \"667169da-8564-4c09-8be0-f50d1cce0888\") " pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.087887 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/667169da-8564-4c09-8be0-f50d1cce0888-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gx7c7\" (UID: \"667169da-8564-4c09-8be0-f50d1cce0888\") " pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.087959 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4jjb\" (UniqueName: \"kubernetes.io/projected/667169da-8564-4c09-8be0-f50d1cce0888-kube-api-access-b4jjb\") pod \"dnsmasq-dns-666b6646f7-gx7c7\" (UID: \"667169da-8564-4c09-8be0-f50d1cce0888\") " pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.088730 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/667169da-8564-4c09-8be0-f50d1cce0888-config\") pod \"dnsmasq-dns-666b6646f7-gx7c7\" (UID: \"667169da-8564-4c09-8be0-f50d1cce0888\") " pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.088742 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/667169da-8564-4c09-8be0-f50d1cce0888-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gx7c7\" 
(UID: \"667169da-8564-4c09-8be0-f50d1cce0888\") " pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.108090 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4jjb\" (UniqueName: \"kubernetes.io/projected/667169da-8564-4c09-8be0-f50d1cce0888-kube-api-access-b4jjb\") pod \"dnsmasq-dns-666b6646f7-gx7c7\" (UID: \"667169da-8564-4c09-8be0-f50d1cce0888\") " pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.108646 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-smsnx"] Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.111788 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.117190 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-smsnx"] Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.145138 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.291959 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/024941c4-acae-45c4-9347-3c981d7a0348-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-smsnx\" (UID: \"024941c4-acae-45c4-9347-3c981d7a0348\") " pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.292339 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjrs4\" (UniqueName: \"kubernetes.io/projected/024941c4-acae-45c4-9347-3c981d7a0348-kube-api-access-zjrs4\") pod \"dnsmasq-dns-57d769cc4f-smsnx\" (UID: \"024941c4-acae-45c4-9347-3c981d7a0348\") " pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.292379 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/024941c4-acae-45c4-9347-3c981d7a0348-config\") pod \"dnsmasq-dns-57d769cc4f-smsnx\" (UID: \"024941c4-acae-45c4-9347-3c981d7a0348\") " pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.393649 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjrs4\" (UniqueName: \"kubernetes.io/projected/024941c4-acae-45c4-9347-3c981d7a0348-kube-api-access-zjrs4\") pod \"dnsmasq-dns-57d769cc4f-smsnx\" (UID: \"024941c4-acae-45c4-9347-3c981d7a0348\") " pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.393714 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/024941c4-acae-45c4-9347-3c981d7a0348-config\") pod \"dnsmasq-dns-57d769cc4f-smsnx\" (UID: 
\"024941c4-acae-45c4-9347-3c981d7a0348\") " pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.393785 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/024941c4-acae-45c4-9347-3c981d7a0348-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-smsnx\" (UID: \"024941c4-acae-45c4-9347-3c981d7a0348\") " pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.395049 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/024941c4-acae-45c4-9347-3c981d7a0348-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-smsnx\" (UID: \"024941c4-acae-45c4-9347-3c981d7a0348\") " pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.395137 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/024941c4-acae-45c4-9347-3c981d7a0348-config\") pod \"dnsmasq-dns-57d769cc4f-smsnx\" (UID: \"024941c4-acae-45c4-9347-3c981d7a0348\") " pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.418735 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjrs4\" (UniqueName: \"kubernetes.io/projected/024941c4-acae-45c4-9347-3c981d7a0348-kube-api-access-zjrs4\") pod \"dnsmasq-dns-57d769cc4f-smsnx\" (UID: \"024941c4-acae-45c4-9347-3c981d7a0348\") " pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.453792 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.631500 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gx7c7"] Nov 29 07:20:39 crc kubenswrapper[4828]: I1129 07:20:39.788675 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" event={"ID":"667169da-8564-4c09-8be0-f50d1cce0888","Type":"ContainerStarted","Data":"4a5ec250acb21a4ad3f44080f536b84015b5fc233bb1ae8a1458218fb3fda182"} Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.140824 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-smsnx"] Nov 29 07:20:40 crc kubenswrapper[4828]: W1129 07:20:40.144400 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod024941c4_acae_45c4_9347_3c981d7a0348.slice/crio-f7c77a8d0cc8ee1024b58218abd48ee235fc69c549783573a68246ff3e0794be WatchSource:0}: Error finding container f7c77a8d0cc8ee1024b58218abd48ee235fc69c549783573a68246ff3e0794be: Status 404 returned error can't find the container with id f7c77a8d0cc8ee1024b58218abd48ee235fc69c549783573a68246ff3e0794be Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.222838 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.249442 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.251462 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.263555 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.263702 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.263974 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zfnnk" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.264283 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.264565 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.264740 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.264925 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.265986 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.270124 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.270788 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.271325 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.271708 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.271965 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-x6wkx" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.272187 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.272409 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.276374 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.285455 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.408974 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-config-data\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc 
kubenswrapper[4828]: I1129 07:20:40.409019 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409045 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5e6d36a9-09a5-45d6-bae5-89a977408440-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409063 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409086 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409117 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409136 4828 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409154 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409194 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409211 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409233 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409249 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrp8p\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-kube-api-access-zrp8p\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409292 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409307 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409407 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409452 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23acf022-f4ef-4a49-8771-e07792440c6c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409470 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsc82\" (UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-kube-api-access-dsc82\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409487 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409610 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409661 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5e6d36a9-09a5-45d6-bae5-89a977408440-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409688 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23acf022-f4ef-4a49-8771-e07792440c6c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.409778 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511139 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23acf022-f4ef-4a49-8771-e07792440c6c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511193 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511220 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-config-data\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511287 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511313 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5e6d36a9-09a5-45d6-bae5-89a977408440-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511331 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511354 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511377 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511394 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511410 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 
07:20:40.511430 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511443 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511466 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrp8p\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-kube-api-access-zrp8p\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511513 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511557 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511579 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511602 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511622 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23acf022-f4ef-4a49-8771-e07792440c6c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511637 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsc82\" (UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-kube-api-access-dsc82\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511653 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511675 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.511697 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5e6d36a9-09a5-45d6-bae5-89a977408440-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.512322 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-config-data\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.512571 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.512615 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.512953 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.513223 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.513290 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.513737 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.513829 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.514138 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.515131 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.515382 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.515615 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.517507 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23acf022-f4ef-4a49-8771-e07792440c6c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.518086 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.518586 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.518986 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23acf022-f4ef-4a49-8771-e07792440c6c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.521838 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5e6d36a9-09a5-45d6-bae5-89a977408440-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.528869 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5e6d36a9-09a5-45d6-bae5-89a977408440-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.529497 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.531448 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrp8p\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-kube-api-access-zrp8p\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.533593 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsc82\" (UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-kube-api-access-dsc82\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.534344 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.538553 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.547047 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.590891 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.608360 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:20:40 crc kubenswrapper[4828]: I1129 07:20:40.805922 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" event={"ID":"024941c4-acae-45c4-9347-3c981d7a0348","Type":"ContainerStarted","Data":"f7c77a8d0cc8ee1024b58218abd48ee235fc69c549783573a68246ff3e0794be"} Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.057860 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:20:41 crc kubenswrapper[4828]: W1129 07:20:41.065819 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23acf022_f4ef_4a49_8771_e07792440c6c.slice/crio-3c2bcdecd74c631078ac649e66a993815c91aacad13fd3de075dfcb47053c99b WatchSource:0}: Error finding container 3c2bcdecd74c631078ac649e66a993815c91aacad13fd3de075dfcb47053c99b: Status 404 returned error can't find the container with id 3c2bcdecd74c631078ac649e66a993815c91aacad13fd3de075dfcb47053c99b Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.130136 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.487480 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.487978 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:20:41 crc 
kubenswrapper[4828]: I1129 07:20:41.705297 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.706698 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.709238 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-shpsr" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.709545 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.709671 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.710551 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.716832 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.723999 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.842954 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.843008 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.843114 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.843156 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.843244 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-config-data-default\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.843350 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmg5j\" (UniqueName: \"kubernetes.io/projected/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-kube-api-access-rmg5j\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.843398 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.843424 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.851970 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5e6d36a9-09a5-45d6-bae5-89a977408440","Type":"ContainerStarted","Data":"128a2b71d52255617957dac1d3543a6829f892722b759505203d6ba5f156019a"} Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.854654 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23acf022-f4ef-4a49-8771-e07792440c6c","Type":"ContainerStarted","Data":"3c2bcdecd74c631078ac649e66a993815c91aacad13fd3de075dfcb47053c99b"} Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.947471 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.947546 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-kolla-config\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.947627 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.947682 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.947729 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-config-data-default\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.947795 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmg5j\" (UniqueName: \"kubernetes.io/projected/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-kube-api-access-rmg5j\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.947840 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.947872 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-operator-scripts\") pod \"openstack-galera-0\" 
(UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.949598 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-config-data-default\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.949863 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.947832 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.950682 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:41 crc kubenswrapper[4828]: I1129 07:20:41.951035 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-kolla-config\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:42 crc kubenswrapper[4828]: I1129 07:20:41.961168 
4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:42 crc kubenswrapper[4828]: I1129 07:20:41.961655 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:42 crc kubenswrapper[4828]: I1129 07:20:42.012306 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmg5j\" (UniqueName: \"kubernetes.io/projected/bb49e4ad-de75-4a14-bbf3-f5bd0099add6-kube-api-access-rmg5j\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:42 crc kubenswrapper[4828]: I1129 07:20:42.027542 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"bb49e4ad-de75-4a14-bbf3-f5bd0099add6\") " pod="openstack/openstack-galera-0" Nov 29 07:20:42 crc kubenswrapper[4828]: I1129 07:20:42.057083 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 29 07:20:42 crc kubenswrapper[4828]: I1129 07:20:42.863352 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.111106 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.112760 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.117058 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-svvd4" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.122965 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.123078 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.122974 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.124303 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.275695 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f86097ba-a57f-4f34-8668-dc1daef612da-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.275753 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f86097ba-a57f-4f34-8668-dc1daef612da-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.275859 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4sm5\" (UniqueName: 
\"kubernetes.io/projected/f86097ba-a57f-4f34-8668-dc1daef612da-kube-api-access-p4sm5\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.275913 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f86097ba-a57f-4f34-8668-dc1daef612da-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.275948 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f86097ba-a57f-4f34-8668-dc1daef612da-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.275989 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86097ba-a57f-4f34-8668-dc1daef612da-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.276021 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f86097ba-a57f-4f34-8668-dc1daef612da-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.276056 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.377753 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f86097ba-a57f-4f34-8668-dc1daef612da-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.377836 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f86097ba-a57f-4f34-8668-dc1daef612da-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.377885 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4sm5\" (UniqueName: \"kubernetes.io/projected/f86097ba-a57f-4f34-8668-dc1daef612da-kube-api-access-p4sm5\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.377929 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f86097ba-a57f-4f34-8668-dc1daef612da-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.377965 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f86097ba-a57f-4f34-8668-dc1daef612da-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.377985 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86097ba-a57f-4f34-8668-dc1daef612da-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.378024 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f86097ba-a57f-4f34-8668-dc1daef612da-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.378047 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.378480 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.381871 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f86097ba-a57f-4f34-8668-dc1daef612da-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.382849 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f86097ba-a57f-4f34-8668-dc1daef612da-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.382921 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f86097ba-a57f-4f34-8668-dc1daef612da-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.384423 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f86097ba-a57f-4f34-8668-dc1daef612da-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.394407 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f86097ba-a57f-4f34-8668-dc1daef612da-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.401322 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86097ba-a57f-4f34-8668-dc1daef612da-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " 
pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.408931 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4sm5\" (UniqueName: \"kubernetes.io/projected/f86097ba-a57f-4f34-8668-dc1daef612da-kube-api-access-p4sm5\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.434637 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f86097ba-a57f-4f34-8668-dc1daef612da\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.514857 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.516224 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.531981 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.532077 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.532230 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-2ptpm" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.551996 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.687039 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f120af9-3005-49a2-9099-818ef49164dc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.687101 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f120af9-3005-49a2-9099-818ef49164dc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.687142 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f120af9-3005-49a2-9099-818ef49164dc-kolla-config\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.687244 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-csfcl\" (UniqueName: \"kubernetes.io/projected/8f120af9-3005-49a2-9099-818ef49164dc-kube-api-access-csfcl\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.687482 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f120af9-3005-49a2-9099-818ef49164dc-config-data\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.735868 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.790405 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f120af9-3005-49a2-9099-818ef49164dc-config-data\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.790572 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f120af9-3005-49a2-9099-818ef49164dc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.790626 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f120af9-3005-49a2-9099-818ef49164dc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.790657 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f120af9-3005-49a2-9099-818ef49164dc-kolla-config\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.790686 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csfcl\" (UniqueName: \"kubernetes.io/projected/8f120af9-3005-49a2-9099-818ef49164dc-kube-api-access-csfcl\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.802483 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f120af9-3005-49a2-9099-818ef49164dc-kolla-config\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.807907 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f120af9-3005-49a2-9099-818ef49164dc-config-data\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.817092 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f120af9-3005-49a2-9099-818ef49164dc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.845606 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f120af9-3005-49a2-9099-818ef49164dc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc 
kubenswrapper[4828]: I1129 07:20:43.860661 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csfcl\" (UniqueName: \"kubernetes.io/projected/8f120af9-3005-49a2-9099-818ef49164dc-kube-api-access-csfcl\") pod \"memcached-0\" (UID: \"8f120af9-3005-49a2-9099-818ef49164dc\") " pod="openstack/memcached-0" Nov 29 07:20:43 crc kubenswrapper[4828]: I1129 07:20:43.932589 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bb49e4ad-de75-4a14-bbf3-f5bd0099add6","Type":"ContainerStarted","Data":"b7f821ce1c317ba18b7f14d358b68bd63f91c7a3ac2c4c08c5703f894a9cd048"} Nov 29 07:20:44 crc kubenswrapper[4828]: I1129 07:20:44.161117 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 29 07:20:45 crc kubenswrapper[4828]: I1129 07:20:45.032700 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 29 07:20:45 crc kubenswrapper[4828]: I1129 07:20:45.063127 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 29 07:20:45 crc kubenswrapper[4828]: I1129 07:20:45.603630 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:20:45 crc kubenswrapper[4828]: I1129 07:20:45.605421 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 07:20:45 crc kubenswrapper[4828]: I1129 07:20:45.615013 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mzf57" Nov 29 07:20:45 crc kubenswrapper[4828]: I1129 07:20:45.630016 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:20:45 crc kubenswrapper[4828]: I1129 07:20:45.697074 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8kzw\" (UniqueName: \"kubernetes.io/projected/da136d32-fe97-49ae-b9eb-c94dda775a13-kube-api-access-z8kzw\") pod \"kube-state-metrics-0\" (UID: \"da136d32-fe97-49ae-b9eb-c94dda775a13\") " pod="openstack/kube-state-metrics-0" Nov 29 07:20:45 crc kubenswrapper[4828]: I1129 07:20:45.798638 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8kzw\" (UniqueName: \"kubernetes.io/projected/da136d32-fe97-49ae-b9eb-c94dda775a13-kube-api-access-z8kzw\") pod \"kube-state-metrics-0\" (UID: \"da136d32-fe97-49ae-b9eb-c94dda775a13\") " pod="openstack/kube-state-metrics-0" Nov 29 07:20:45 crc kubenswrapper[4828]: I1129 07:20:45.834092 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8kzw\" (UniqueName: \"kubernetes.io/projected/da136d32-fe97-49ae-b9eb-c94dda775a13-kube-api-access-z8kzw\") pod \"kube-state-metrics-0\" (UID: \"da136d32-fe97-49ae-b9eb-c94dda775a13\") " pod="openstack/kube-state-metrics-0" Nov 29 07:20:45 crc kubenswrapper[4828]: I1129 07:20:45.958305 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 07:20:45 crc kubenswrapper[4828]: I1129 07:20:45.978340 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8f120af9-3005-49a2-9099-818ef49164dc","Type":"ContainerStarted","Data":"f77e72089824fe3c121953cb7b7bc83c56e2c68b363abd55d6fde674c3aa97ed"} Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.916245 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-twdtp"] Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.921538 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-twdtp" Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.929724 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.930435 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.930550 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-knwvq" Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.933648 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-twdtp"] Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.954048 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5197fd5f-121f-4085-8985-a8e31ee8f997-var-run-ovn\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.954095 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5197fd5f-121f-4085-8985-a8e31ee8f997-combined-ca-bundle\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.954159 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5197fd5f-121f-4085-8985-a8e31ee8f997-var-log-ovn\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.954181 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfdn7\" (UniqueName: \"kubernetes.io/projected/5197fd5f-121f-4085-8985-a8e31ee8f997-kube-api-access-gfdn7\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.954223 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5197fd5f-121f-4085-8985-a8e31ee8f997-var-run\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.954258 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5197fd5f-121f-4085-8985-a8e31ee8f997-scripts\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.954310 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5197fd5f-121f-4085-8985-a8e31ee8f997-ovn-controller-tls-certs\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.963498 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-hhg6w"] Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.970280 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:48 crc kubenswrapper[4828]: I1129 07:20:48.978693 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hhg6w"] Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.054993 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48706635-ba41-45a3-8167-56c05555f0d2-var-run\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.055078 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5197fd5f-121f-4085-8985-a8e31ee8f997-var-log-ovn\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.055097 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfdn7\" (UniqueName: \"kubernetes.io/projected/5197fd5f-121f-4085-8985-a8e31ee8f997-kube-api-access-gfdn7\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.055127 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48706635-ba41-45a3-8167-56c05555f0d2-scripts\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.056076 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5197fd5f-121f-4085-8985-a8e31ee8f997-var-log-ovn\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.056123 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5197fd5f-121f-4085-8985-a8e31ee8f997-var-run\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.056156 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5197fd5f-121f-4085-8985-a8e31ee8f997-scripts\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.056178 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5197fd5f-121f-4085-8985-a8e31ee8f997-ovn-controller-tls-certs\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.056197 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/48706635-ba41-45a3-8167-56c05555f0d2-var-log\") pod \"ovn-controller-ovs-hhg6w\" (UID: 
\"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.056300 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/48706635-ba41-45a3-8167-56c05555f0d2-var-lib\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.056332 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5197fd5f-121f-4085-8985-a8e31ee8f997-var-run-ovn\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.056349 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5197fd5f-121f-4085-8985-a8e31ee8f997-combined-ca-bundle\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.056366 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/48706635-ba41-45a3-8167-56c05555f0d2-etc-ovs\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.056410 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lv2p\" (UniqueName: \"kubernetes.io/projected/48706635-ba41-45a3-8167-56c05555f0d2-kube-api-access-6lv2p\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " 
pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.056646 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5197fd5f-121f-4085-8985-a8e31ee8f997-var-run\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.058123 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5197fd5f-121f-4085-8985-a8e31ee8f997-var-run-ovn\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.059015 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5197fd5f-121f-4085-8985-a8e31ee8f997-scripts\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.063281 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5197fd5f-121f-4085-8985-a8e31ee8f997-ovn-controller-tls-certs\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.063803 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5197fd5f-121f-4085-8985-a8e31ee8f997-combined-ca-bundle\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.071657 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gfdn7\" (UniqueName: \"kubernetes.io/projected/5197fd5f-121f-4085-8985-a8e31ee8f997-kube-api-access-gfdn7\") pod \"ovn-controller-twdtp\" (UID: \"5197fd5f-121f-4085-8985-a8e31ee8f997\") " pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.156978 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48706635-ba41-45a3-8167-56c05555f0d2-scripts\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.159717 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/48706635-ba41-45a3-8167-56c05555f0d2-var-log\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.160013 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/48706635-ba41-45a3-8167-56c05555f0d2-var-lib\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.160173 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/48706635-ba41-45a3-8167-56c05555f0d2-etc-ovs\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.160311 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lv2p\" (UniqueName: \"kubernetes.io/projected/48706635-ba41-45a3-8167-56c05555f0d2-kube-api-access-6lv2p\") pod 
\"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.160434 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48706635-ba41-45a3-8167-56c05555f0d2-var-run\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.160754 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48706635-ba41-45a3-8167-56c05555f0d2-var-run\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.159599 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48706635-ba41-45a3-8167-56c05555f0d2-scripts\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.161076 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/48706635-ba41-45a3-8167-56c05555f0d2-var-lib\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.161178 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/48706635-ba41-45a3-8167-56c05555f0d2-etc-ovs\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.161184 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/48706635-ba41-45a3-8167-56c05555f0d2-var-log\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.180993 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lv2p\" (UniqueName: \"kubernetes.io/projected/48706635-ba41-45a3-8167-56c05555f0d2-kube-api-access-6lv2p\") pod \"ovn-controller-ovs-hhg6w\" (UID: \"48706635-ba41-45a3-8167-56c05555f0d2\") " pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.258099 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-twdtp" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.285066 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.784350 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.786178 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.791897 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.791897 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.792452 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.793716 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-n8qsx" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.802153 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.809872 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.873430 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc069e9b-6fbd-427b-bc62-b99d31c5292d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.873626 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.873654 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sj79d\" (UniqueName: \"kubernetes.io/projected/cc069e9b-6fbd-427b-bc62-b99d31c5292d-kube-api-access-sj79d\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.873685 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc069e9b-6fbd-427b-bc62-b99d31c5292d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.873760 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc069e9b-6fbd-427b-bc62-b99d31c5292d-config\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.873786 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc069e9b-6fbd-427b-bc62-b99d31c5292d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.874044 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cc069e9b-6fbd-427b-bc62-b99d31c5292d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.874279 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/cc069e9b-6fbd-427b-bc62-b99d31c5292d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.974934 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj79d\" (UniqueName: \"kubernetes.io/projected/cc069e9b-6fbd-427b-bc62-b99d31c5292d-kube-api-access-sj79d\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.975001 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc069e9b-6fbd-427b-bc62-b99d31c5292d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.975037 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc069e9b-6fbd-427b-bc62-b99d31c5292d-config\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.975064 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc069e9b-6fbd-427b-bc62-b99d31c5292d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.975111 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cc069e9b-6fbd-427b-bc62-b99d31c5292d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " 
pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.975157 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc069e9b-6fbd-427b-bc62-b99d31c5292d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.975209 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc069e9b-6fbd-427b-bc62-b99d31c5292d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.975244 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.975844 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.975980 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cc069e9b-6fbd-427b-bc62-b99d31c5292d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.976352 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cc069e9b-6fbd-427b-bc62-b99d31c5292d-config\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.977391 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc069e9b-6fbd-427b-bc62-b99d31c5292d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.979547 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc069e9b-6fbd-427b-bc62-b99d31c5292d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:49 crc kubenswrapper[4828]: I1129 07:20:49.979872 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc069e9b-6fbd-427b-bc62-b99d31c5292d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:50 crc kubenswrapper[4828]: I1129 07:20:50.000761 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc069e9b-6fbd-427b-bc62-b99d31c5292d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:50 crc kubenswrapper[4828]: I1129 07:20:50.008685 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj79d\" (UniqueName: \"kubernetes.io/projected/cc069e9b-6fbd-427b-bc62-b99d31c5292d-kube-api-access-sj79d\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " 
pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:50 crc kubenswrapper[4828]: I1129 07:20:50.014733 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cc069e9b-6fbd-427b-bc62-b99d31c5292d\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:50 crc kubenswrapper[4828]: I1129 07:20:50.120492 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 29 07:20:50 crc kubenswrapper[4828]: W1129 07:20:50.515696 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf86097ba_a57f_4f34_8668_dc1daef612da.slice/crio-b9b2768cb304b8fcd9597659952afa41e3fb1adbe82145f6bad7c9c63e33dbc9 WatchSource:0}: Error finding container b9b2768cb304b8fcd9597659952afa41e3fb1adbe82145f6bad7c9c63e33dbc9: Status 404 returned error can't find the container with id b9b2768cb304b8fcd9597659952afa41e3fb1adbe82145f6bad7c9c63e33dbc9 Nov 29 07:20:51 crc kubenswrapper[4828]: I1129 07:20:51.027702 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f86097ba-a57f-4f34-8668-dc1daef612da","Type":"ContainerStarted","Data":"b9b2768cb304b8fcd9597659952afa41e3fb1adbe82145f6bad7c9c63e33dbc9"} Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.144689 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.146136 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.147764 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.148100 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.148370 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.150727 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-tv966" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.161893 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.164380 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.164442 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-config\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.164477 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5gc5\" (UniqueName: \"kubernetes.io/projected/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-kube-api-access-b5gc5\") pod \"ovsdbserver-sb-0\" (UID: 
\"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.164500 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.164525 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.164550 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.164830 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.164872 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " 
pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.266479 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.266534 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.266639 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.266688 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-config\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.266715 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5gc5\" (UniqueName: \"kubernetes.io/projected/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-kube-api-access-b5gc5\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.266754 4828 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.266777 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.266804 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.267199 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.267227 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.267847 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-config\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " 
pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.267977 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.273789 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.274002 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.274002 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.295702 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5gc5\" (UniqueName: \"kubernetes.io/projected/e2df4c7c-de4a-48b4-99b8-e66672e38e3d-kube-api-access-b5gc5\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.301990 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e2df4c7c-de4a-48b4-99b8-e66672e38e3d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:20:53 crc kubenswrapper[4828]: I1129 07:20:53.466903 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 29 07:21:11 crc kubenswrapper[4828]: I1129 07:21:11.486716 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:21:11 crc kubenswrapper[4828]: I1129 07:21:11.487673 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:21:13 crc kubenswrapper[4828]: E1129 07:21:13.260908 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 07:21:13 crc kubenswrapper[4828]: E1129 07:21:13.261595 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2wf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-j58pp_openstack(25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:21:13 crc kubenswrapper[4828]: E1129 07:21:13.262865 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-j58pp" podUID="25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab" Nov 29 07:21:13 crc kubenswrapper[4828]: E1129 07:21:13.600471 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 07:21:13 crc kubenswrapper[4828]: E1129 07:21:13.600626 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4jjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullP
olicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-gx7c7_openstack(667169da-8564-4c09-8be0-f50d1cce0888): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:21:13 crc kubenswrapper[4828]: E1129 07:21:13.601874 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" podUID="667169da-8564-4c09-8be0-f50d1cce0888" Nov 29 07:21:14 crc kubenswrapper[4828]: E1129 07:21:14.201339 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" podUID="667169da-8564-4c09-8be0-f50d1cce0888" Nov 29 07:21:15 crc kubenswrapper[4828]: E1129 07:21:15.145791 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 29 07:21:15 crc kubenswrapper[4828]: E1129 07:21:15.146573 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4sm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerRes
izePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(f86097ba-a57f-4f34-8668-dc1daef612da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:21:15 crc kubenswrapper[4828]: E1129 07:21:15.148352 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="f86097ba-a57f-4f34-8668-dc1daef612da" Nov 29 07:21:15 crc kubenswrapper[4828]: E1129 07:21:15.173378 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 29 07:21:15 crc kubenswrapper[4828]: E1129 07:21:15.173557 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmg5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(bb49e4ad-de75-4a14-bbf3-f5bd0099add6): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:21:15 crc kubenswrapper[4828]: E1129 07:21:15.174722 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="bb49e4ad-de75-4a14-bbf3-f5bd0099add6" Nov 29 07:21:15 crc kubenswrapper[4828]: E1129 07:21:15.198637 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 07:21:15 crc kubenswrapper[4828]: E1129 07:21:15.198796 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zjrs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-smsnx_openstack(024941c4-acae-45c4-9347-3c981d7a0348): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:21:15 crc kubenswrapper[4828]: E1129 07:21:15.200841 4828 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" podUID="024941c4-acae-45c4-9347-3c981d7a0348" Nov 29 07:21:15 crc kubenswrapper[4828]: E1129 07:21:15.206637 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="f86097ba-a57f-4f34-8668-dc1daef612da" Nov 29 07:21:15 crc kubenswrapper[4828]: E1129 07:21:15.213516 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="bb49e4ad-de75-4a14-bbf3-f5bd0099add6" Nov 29 07:21:16 crc kubenswrapper[4828]: E1129 07:21:16.157990 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 29 07:21:16 crc kubenswrapper[4828]: E1129 07:21:16.158238 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsc82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(5e6d36a9-09a5-45d6-bae5-89a977408440): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:21:16 crc kubenswrapper[4828]: E1129 07:21:16.159429 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="5e6d36a9-09a5-45d6-bae5-89a977408440" Nov 29 07:21:16 crc kubenswrapper[4828]: E1129 07:21:16.192175 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 29 07:21:16 crc kubenswrapper[4828]: E1129 07:21:16.192421 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrp8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(23acf022-f4ef-4a49-8771-e07792440c6c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:21:16 crc 
kubenswrapper[4828]: E1129 07:21:16.193756 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="23acf022-f4ef-4a49-8771-e07792440c6c" Nov 29 07:21:16 crc kubenswrapper[4828]: E1129 07:21:16.214185 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="23acf022-f4ef-4a49-8771-e07792440c6c" Nov 29 07:21:16 crc kubenswrapper[4828]: E1129 07:21:16.214608 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="5e6d36a9-09a5-45d6-bae5-89a977408440" Nov 29 07:21:16 crc kubenswrapper[4828]: E1129 07:21:16.214917 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" podUID="024941c4-acae-45c4-9347-3c981d7a0348" Nov 29 07:21:16 crc kubenswrapper[4828]: E1129 07:21:16.847712 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Nov 29 07:21:16 crc kubenswrapper[4828]: E1129 07:21:16.848327 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n544h555h68h5bdhb9h95h57fh5d7hfbh645h97h576h5b6h558h98h58fhchbh575hbh7dh7fhb7h5c5hd8h65h65dh5ch577h54ch59bh5fbq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csfcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:
ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(8f120af9-3005-49a2-9099-818ef49164dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:21:16 crc kubenswrapper[4828]: E1129 07:21:16.849496 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="8f120af9-3005-49a2-9099-818ef49164dc" Nov 29 07:21:16 crc kubenswrapper[4828]: E1129 07:21:16.904664 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 07:21:16 crc kubenswrapper[4828]: E1129 07:21:16.904920 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2tnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]V
olumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-6v4l6_openstack(6769fd04-c7fe-4667-96b5-52414f299b7a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:21:16 crc kubenswrapper[4828]: E1129 07:21:16.906388 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" podUID="6769fd04-c7fe-4667-96b5-52414f299b7a" Nov 29 07:21:16 crc kubenswrapper[4828]: I1129 07:21:16.941054 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-j58pp" Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.078993 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2wf5\" (UniqueName: \"kubernetes.io/projected/25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab-kube-api-access-z2wf5\") pod \"25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab\" (UID: \"25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab\") " Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.079482 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab-config\") pod \"25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab\" (UID: \"25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab\") " Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.081210 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab-config" (OuterVolumeSpecName: "config") pod "25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab" (UID: "25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.092769 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab-kube-api-access-z2wf5" (OuterVolumeSpecName: "kube-api-access-z2wf5") pod "25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab" (UID: "25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab"). InnerVolumeSpecName "kube-api-access-z2wf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.181933 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2wf5\" (UniqueName: \"kubernetes.io/projected/25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab-kube-api-access-z2wf5\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.182024 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.221637 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-j58pp" Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.223170 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-j58pp" event={"ID":"25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab","Type":"ContainerDied","Data":"52fdac7e18b86c51129b0bf51ad37f6ff5fe5ee6766e9857ab40e4dfb6f78ccb"} Nov 29 07:21:17 crc kubenswrapper[4828]: E1129 07:21:17.225455 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="8f120af9-3005-49a2-9099-818ef49164dc" Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.326927 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-j58pp"] Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.335460 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-j58pp"] Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.455665 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab" path="/var/lib/kubelet/pods/25fcdfdb-4094-4ef6-8e57-8fe5aebb56ab/volumes" Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.456398 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.515707 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-twdtp"] Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.692774 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.718880 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 
07:21:17.735234 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.819080 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hhg6w"] Nov 29 07:21:17 crc kubenswrapper[4828]: W1129 07:21:17.822650 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48706635_ba41_45a3_8167_56c05555f0d2.slice/crio-dc36bdee9c8adaf2adf6268859b319c3b168ec37c18065340f48abb27df8551d WatchSource:0}: Error finding container dc36bdee9c8adaf2adf6268859b319c3b168ec37c18065340f48abb27df8551d: Status 404 returned error can't find the container with id dc36bdee9c8adaf2adf6268859b319c3b168ec37c18065340f48abb27df8551d Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.905706 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6769fd04-c7fe-4667-96b5-52414f299b7a-config\") pod \"6769fd04-c7fe-4667-96b5-52414f299b7a\" (UID: \"6769fd04-c7fe-4667-96b5-52414f299b7a\") " Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.905801 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6769fd04-c7fe-4667-96b5-52414f299b7a-dns-svc\") pod \"6769fd04-c7fe-4667-96b5-52414f299b7a\" (UID: \"6769fd04-c7fe-4667-96b5-52414f299b7a\") " Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.905904 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2tnv\" (UniqueName: \"kubernetes.io/projected/6769fd04-c7fe-4667-96b5-52414f299b7a-kube-api-access-n2tnv\") pod \"6769fd04-c7fe-4667-96b5-52414f299b7a\" (UID: \"6769fd04-c7fe-4667-96b5-52414f299b7a\") " Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.906437 4828 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6769fd04-c7fe-4667-96b5-52414f299b7a-config" (OuterVolumeSpecName: "config") pod "6769fd04-c7fe-4667-96b5-52414f299b7a" (UID: "6769fd04-c7fe-4667-96b5-52414f299b7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.906453 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6769fd04-c7fe-4667-96b5-52414f299b7a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6769fd04-c7fe-4667-96b5-52414f299b7a" (UID: "6769fd04-c7fe-4667-96b5-52414f299b7a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:17 crc kubenswrapper[4828]: I1129 07:21:17.912472 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6769fd04-c7fe-4667-96b5-52414f299b7a-kube-api-access-n2tnv" (OuterVolumeSpecName: "kube-api-access-n2tnv") pod "6769fd04-c7fe-4667-96b5-52414f299b7a" (UID: "6769fd04-c7fe-4667-96b5-52414f299b7a"). InnerVolumeSpecName "kube-api-access-n2tnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:18 crc kubenswrapper[4828]: I1129 07:21:18.008195 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6769fd04-c7fe-4667-96b5-52414f299b7a-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:18 crc kubenswrapper[4828]: I1129 07:21:18.008239 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6769fd04-c7fe-4667-96b5-52414f299b7a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:18 crc kubenswrapper[4828]: I1129 07:21:18.008248 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2tnv\" (UniqueName: \"kubernetes.io/projected/6769fd04-c7fe-4667-96b5-52414f299b7a-kube-api-access-n2tnv\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:18 crc kubenswrapper[4828]: I1129 07:21:18.229653 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"da136d32-fe97-49ae-b9eb-c94dda775a13","Type":"ContainerStarted","Data":"0e55e9c6bd716378be99154e1987c48f40faef6c92ef6a885e39814c7c5f204d"} Nov 29 07:21:18 crc kubenswrapper[4828]: I1129 07:21:18.231189 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" Nov 29 07:21:18 crc kubenswrapper[4828]: I1129 07:21:18.231185 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6v4l6" event={"ID":"6769fd04-c7fe-4667-96b5-52414f299b7a","Type":"ContainerDied","Data":"710fa6ca97a8a13639f6af7212aa58920301d7b32d1358d66fd302dbbf8602bb"} Nov 29 07:21:18 crc kubenswrapper[4828]: I1129 07:21:18.233139 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e2df4c7c-de4a-48b4-99b8-e66672e38e3d","Type":"ContainerStarted","Data":"cf5ef3ac94bfa2fdbf143a944d183eaab38057688a76e949de8ee9b9448b27e1"} Nov 29 07:21:18 crc kubenswrapper[4828]: I1129 07:21:18.234611 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cc069e9b-6fbd-427b-bc62-b99d31c5292d","Type":"ContainerStarted","Data":"5be961f5fc196e5d1f279169d29e8b79737ccf09fd6f931ab95958dc7f55dd0a"} Nov 29 07:21:18 crc kubenswrapper[4828]: I1129 07:21:18.236078 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-twdtp" event={"ID":"5197fd5f-121f-4085-8985-a8e31ee8f997","Type":"ContainerStarted","Data":"500a7a95a2378ce78e9bca756703606ea25fae09ffe2a76f3a6b352a93d95bee"} Nov 29 07:21:18 crc kubenswrapper[4828]: I1129 07:21:18.237799 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hhg6w" event={"ID":"48706635-ba41-45a3-8167-56c05555f0d2","Type":"ContainerStarted","Data":"dc36bdee9c8adaf2adf6268859b319c3b168ec37c18065340f48abb27df8551d"} Nov 29 07:21:18 crc kubenswrapper[4828]: I1129 07:21:18.335324 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6v4l6"] Nov 29 07:21:18 crc kubenswrapper[4828]: I1129 07:21:18.341879 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6v4l6"] Nov 29 07:21:19 crc kubenswrapper[4828]: I1129 07:21:19.421234 4828 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6769fd04-c7fe-4667-96b5-52414f299b7a" path="/var/lib/kubelet/pods/6769fd04-c7fe-4667-96b5-52414f299b7a/volumes" Nov 29 07:21:24 crc kubenswrapper[4828]: I1129 07:21:24.285770 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e2df4c7c-de4a-48b4-99b8-e66672e38e3d","Type":"ContainerStarted","Data":"37bb73e9144e4f92f9a64da8398bf9079b82adaa3b9772635c52b0217238731b"} Nov 29 07:21:24 crc kubenswrapper[4828]: I1129 07:21:24.288370 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cc069e9b-6fbd-427b-bc62-b99d31c5292d","Type":"ContainerStarted","Data":"383d353eff70aa605b05fa0fb0e753a8ec826a87e8b44cf358d81f511f7825df"} Nov 29 07:21:24 crc kubenswrapper[4828]: I1129 07:21:24.291593 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-twdtp" event={"ID":"5197fd5f-121f-4085-8985-a8e31ee8f997","Type":"ContainerStarted","Data":"e8336c634ccca8b3d31f995525a6595dbf0ea7a1264724ad5fd96f1ac65c8acc"} Nov 29 07:21:24 crc kubenswrapper[4828]: I1129 07:21:24.292011 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-twdtp" Nov 29 07:21:24 crc kubenswrapper[4828]: I1129 07:21:24.293861 4828 generic.go:334] "Generic (PLEG): container finished" podID="48706635-ba41-45a3-8167-56c05555f0d2" containerID="e3e16e61c6fbe3d8b882a7efcd1f2d1b7c2b1eb70ebc48ce647667ed8504fe13" exitCode=0 Nov 29 07:21:24 crc kubenswrapper[4828]: I1129 07:21:24.293952 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hhg6w" event={"ID":"48706635-ba41-45a3-8167-56c05555f0d2","Type":"ContainerDied","Data":"e3e16e61c6fbe3d8b882a7efcd1f2d1b7c2b1eb70ebc48ce647667ed8504fe13"} Nov 29 07:21:24 crc kubenswrapper[4828]: I1129 07:21:24.297904 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"da136d32-fe97-49ae-b9eb-c94dda775a13","Type":"ContainerStarted","Data":"f4dcf140536ad3e36b817202f8bb975b0fd7e7879bc7cbdc96e57a8140a803f5"} Nov 29 07:21:24 crc kubenswrapper[4828]: I1129 07:21:24.298067 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 29 07:21:24 crc kubenswrapper[4828]: I1129 07:21:24.340975 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-twdtp" podStartSLOduration=30.831767382 podStartE2EDuration="36.340921515s" podCreationTimestamp="2025-11-29 07:20:48 +0000 UTC" firstStartedPulling="2025-11-29 07:21:17.567062127 +0000 UTC m=+1217.189138185" lastFinishedPulling="2025-11-29 07:21:23.07621626 +0000 UTC m=+1222.698292318" observedRunningTime="2025-11-29 07:21:24.316975632 +0000 UTC m=+1223.939051690" watchObservedRunningTime="2025-11-29 07:21:24.340921515 +0000 UTC m=+1223.962997573" Nov 29 07:21:24 crc kubenswrapper[4828]: I1129 07:21:24.347728 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=33.733848975 podStartE2EDuration="39.347705339s" podCreationTimestamp="2025-11-29 07:20:45 +0000 UTC" firstStartedPulling="2025-11-29 07:21:17.541481582 +0000 UTC m=+1217.163557640" lastFinishedPulling="2025-11-29 07:21:23.155337946 +0000 UTC m=+1222.777414004" observedRunningTime="2025-11-29 07:21:24.334936652 +0000 UTC m=+1223.957012710" watchObservedRunningTime="2025-11-29 07:21:24.347705339 +0000 UTC m=+1223.969781397" Nov 29 07:21:25 crc kubenswrapper[4828]: I1129 07:21:25.307016 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hhg6w" event={"ID":"48706635-ba41-45a3-8167-56c05555f0d2","Type":"ContainerStarted","Data":"1cbaab4b2ab99727928fa745813fd73f4411a404bb621c40c23cef5f9f28a8f2"} Nov 29 07:21:25 crc kubenswrapper[4828]: I1129 07:21:25.307361 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-hhg6w" event={"ID":"48706635-ba41-45a3-8167-56c05555f0d2","Type":"ContainerStarted","Data":"8c7724d0dd4730f73ec783ff6544372da45df1ca710b652e6835e8cd42ffa3c1"} Nov 29 07:21:25 crc kubenswrapper[4828]: I1129 07:21:25.330074 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-hhg6w" podStartSLOduration=32.077990124 podStartE2EDuration="37.330052404s" podCreationTimestamp="2025-11-29 07:20:48 +0000 UTC" firstStartedPulling="2025-11-29 07:21:17.824160841 +0000 UTC m=+1217.446236899" lastFinishedPulling="2025-11-29 07:21:23.076223131 +0000 UTC m=+1222.698299179" observedRunningTime="2025-11-29 07:21:25.326294848 +0000 UTC m=+1224.948370906" watchObservedRunningTime="2025-11-29 07:21:25.330052404 +0000 UTC m=+1224.952128462" Nov 29 07:21:26 crc kubenswrapper[4828]: I1129 07:21:26.314902 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:21:26 crc kubenswrapper[4828]: I1129 07:21:26.314962 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hhg6w" Nov 29 07:21:27 crc kubenswrapper[4828]: I1129 07:21:27.324959 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cc069e9b-6fbd-427b-bc62-b99d31c5292d","Type":"ContainerStarted","Data":"331780c08008247a1862ec2bc5715620df98e43ee9a0c18f3f311a8acd7358c6"} Nov 29 07:21:27 crc kubenswrapper[4828]: I1129 07:21:27.327082 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e2df4c7c-de4a-48b4-99b8-e66672e38e3d","Type":"ContainerStarted","Data":"9ff11bc4a3a0a49e925bb742e6d214e79cef2ee81e82ad9a2461a854357f9124"} Nov 29 07:21:27 crc kubenswrapper[4828]: I1129 07:21:27.354006 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=30.299350688 
podStartE2EDuration="39.35398381s" podCreationTimestamp="2025-11-29 07:20:48 +0000 UTC" firstStartedPulling="2025-11-29 07:21:17.70817004 +0000 UTC m=+1217.330246098" lastFinishedPulling="2025-11-29 07:21:26.762803122 +0000 UTC m=+1226.384879220" observedRunningTime="2025-11-29 07:21:27.346491148 +0000 UTC m=+1226.968567216" watchObservedRunningTime="2025-11-29 07:21:27.35398381 +0000 UTC m=+1226.976059868" Nov 29 07:21:27 crc kubenswrapper[4828]: I1129 07:21:27.367743 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=26.305448255 podStartE2EDuration="35.367725832s" podCreationTimestamp="2025-11-29 07:20:52 +0000 UTC" firstStartedPulling="2025-11-29 07:21:17.735287145 +0000 UTC m=+1217.357363203" lastFinishedPulling="2025-11-29 07:21:26.797564682 +0000 UTC m=+1226.419640780" observedRunningTime="2025-11-29 07:21:27.36606419 +0000 UTC m=+1226.988140248" watchObservedRunningTime="2025-11-29 07:21:27.367725832 +0000 UTC m=+1226.989801890" Nov 29 07:21:28 crc kubenswrapper[4828]: I1129 07:21:28.468095 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.121135 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.165109 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.343911 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.377733 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.467833 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.528234 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.748069 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gx7c7"] Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.811725 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ksfrm"] Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.813504 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.825169 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ksfrm"] Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.825573 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.896604 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-rhxqt"] Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.898434 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.904517 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.908706 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-ksfrm\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") " pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.908751 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bkz5\" (UniqueName: \"kubernetes.io/projected/58916077-c611-4cd6-9b53-b668fa2abb47-kube-api-access-4bkz5\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.908775 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-ksfrm\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") " pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.908814 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58916077-c611-4cd6-9b53-b668fa2abb47-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.910007 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58916077-c611-4cd6-9b53-b668fa2abb47-config\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.910077 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/58916077-c611-4cd6-9b53-b668fa2abb47-ovs-rundir\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.910099 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/58916077-c611-4cd6-9b53-b668fa2abb47-ovn-rundir\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.910123 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvnqj\" (UniqueName: \"kubernetes.io/projected/5600555b-3085-4f9e-a31f-2caa3010ff5c-kube-api-access-zvnqj\") pod \"dnsmasq-dns-7fd796d7df-ksfrm\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") " pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.910145 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-config\") pod \"dnsmasq-dns-7fd796d7df-ksfrm\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") " pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.910171 4828 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58916077-c611-4cd6-9b53-b668fa2abb47-combined-ca-bundle\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:29 crc kubenswrapper[4828]: I1129 07:21:29.924806 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rhxqt"] Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.011966 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-ksfrm\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") " pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.012043 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58916077-c611-4cd6-9b53-b668fa2abb47-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.012081 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58916077-c611-4cd6-9b53-b668fa2abb47-config\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.012116 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/58916077-c611-4cd6-9b53-b668fa2abb47-ovs-rundir\") pod \"ovn-controller-metrics-rhxqt\" (UID: 
\"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.012138 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/58916077-c611-4cd6-9b53-b668fa2abb47-ovn-rundir\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.012166 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvnqj\" (UniqueName: \"kubernetes.io/projected/5600555b-3085-4f9e-a31f-2caa3010ff5c-kube-api-access-zvnqj\") pod \"dnsmasq-dns-7fd796d7df-ksfrm\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") " pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.012189 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-config\") pod \"dnsmasq-dns-7fd796d7df-ksfrm\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") " pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.012217 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58916077-c611-4cd6-9b53-b668fa2abb47-combined-ca-bundle\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.012291 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-ksfrm\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.012329 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bkz5\" (UniqueName: \"kubernetes.io/projected/58916077-c611-4cd6-9b53-b668fa2abb47-kube-api-access-4bkz5\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.013684 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-ksfrm\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") " pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.015203 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58916077-c611-4cd6-9b53-b668fa2abb47-config\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.015461 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/58916077-c611-4cd6-9b53-b668fa2abb47-ovs-rundir\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.015533 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/58916077-c611-4cd6-9b53-b668fa2abb47-ovn-rundir\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 
07:21:30.016011 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-config\") pod \"dnsmasq-dns-7fd796d7df-ksfrm\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") " pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.016492 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-ksfrm\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") " pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.021114 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58916077-c611-4cd6-9b53-b668fa2abb47-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.040773 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58916077-c611-4cd6-9b53-b668fa2abb47-combined-ca-bundle\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.045580 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvnqj\" (UniqueName: \"kubernetes.io/projected/5600555b-3085-4f9e-a31f-2caa3010ff5c-kube-api-access-zvnqj\") pod \"dnsmasq-dns-7fd796d7df-ksfrm\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") " pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.117679 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-smsnx"] Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.144156 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.149189 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-h2tdr"] Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.150753 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.155051 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.161242 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-h2tdr"] Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.219052 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rppv\" (UniqueName: \"kubernetes.io/projected/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-kube-api-access-6rppv\") pod \"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.219100 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.219169 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.219230 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.219379 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-config\") pod \"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.238647 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bkz5\" (UniqueName: \"kubernetes.io/projected/58916077-c611-4cd6-9b53-b668fa2abb47-kube-api-access-4bkz5\") pod \"ovn-controller-metrics-rhxqt\" (UID: \"58916077-c611-4cd6-9b53-b668fa2abb47\") " pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.320758 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-config\") pod \"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.320866 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rppv\" (UniqueName: 
\"kubernetes.io/projected/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-kube-api-access-6rppv\") pod \"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.320896 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.320923 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.320963 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.322020 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.323178 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-config\") pod 
\"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.324072 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.325144 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.354249 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rppv\" (UniqueName: \"kubernetes.io/projected/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-kube-api-access-6rppv\") pod \"dnsmasq-dns-86db49b7ff-h2tdr\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.401903 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.491774 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.521838 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rhxqt" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.644796 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.646798 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.649862 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-vln8f" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.649955 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.659019 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.659439 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.681519 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.735069 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5j75\" (UniqueName: \"kubernetes.io/projected/31df9f28-9df3-4686-9aa5-ea45706459fb-kube-api-access-m5j75\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.735210 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31df9f28-9df3-4686-9aa5-ea45706459fb-config\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: 
I1129 07:21:30.735241 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/31df9f28-9df3-4686-9aa5-ea45706459fb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.735293 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31df9f28-9df3-4686-9aa5-ea45706459fb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.735338 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31df9f28-9df3-4686-9aa5-ea45706459fb-scripts\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.735376 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/31df9f28-9df3-4686-9aa5-ea45706459fb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.735403 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/31df9f28-9df3-4686-9aa5-ea45706459fb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.837027 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/31df9f28-9df3-4686-9aa5-ea45706459fb-config\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.837099 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/31df9f28-9df3-4686-9aa5-ea45706459fb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.837144 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31df9f28-9df3-4686-9aa5-ea45706459fb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.837192 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31df9f28-9df3-4686-9aa5-ea45706459fb-scripts\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.837238 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/31df9f28-9df3-4686-9aa5-ea45706459fb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.837286 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/31df9f28-9df3-4686-9aa5-ea45706459fb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 
07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.837340 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5j75\" (UniqueName: \"kubernetes.io/projected/31df9f28-9df3-4686-9aa5-ea45706459fb-kube-api-access-m5j75\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.837998 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/31df9f28-9df3-4686-9aa5-ea45706459fb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.838137 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31df9f28-9df3-4686-9aa5-ea45706459fb-config\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.838416 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31df9f28-9df3-4686-9aa5-ea45706459fb-scripts\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.841689 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31df9f28-9df3-4686-9aa5-ea45706459fb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.841870 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/31df9f28-9df3-4686-9aa5-ea45706459fb-ovn-northd-tls-certs\") pod 
\"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.845871 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/31df9f28-9df3-4686-9aa5-ea45706459fb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.857944 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5j75\" (UniqueName: \"kubernetes.io/projected/31df9f28-9df3-4686-9aa5-ea45706459fb-kube-api-access-m5j75\") pod \"ovn-northd-0\" (UID: \"31df9f28-9df3-4686-9aa5-ea45706459fb\") " pod="openstack/ovn-northd-0" Nov 29 07:21:30 crc kubenswrapper[4828]: I1129 07:21:30.977639 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 29 07:21:35 crc kubenswrapper[4828]: I1129 07:21:35.839806 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-h2tdr"] Nov 29 07:21:35 crc kubenswrapper[4828]: I1129 07:21:35.964061 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 29 07:21:36 crc kubenswrapper[4828]: I1129 07:21:36.086732 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 29 07:21:36 crc kubenswrapper[4828]: I1129 07:21:36.092522 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rhxqt"] Nov 29 07:21:36 crc kubenswrapper[4828]: I1129 07:21:36.175535 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ksfrm"] Nov 29 07:21:36 crc kubenswrapper[4828]: W1129 07:21:36.350190 4828 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31df9f28_9df3_4686_9aa5_ea45706459fb.slice/crio-788019401c3363e162ea3467e02ba6d68e588d31999ae355f6ead61c0e68b3e3 WatchSource:0}: Error finding container 788019401c3363e162ea3467e02ba6d68e588d31999ae355f6ead61c0e68b3e3: Status 404 returned error can't find the container with id 788019401c3363e162ea3467e02ba6d68e588d31999ae355f6ead61c0e68b3e3 Nov 29 07:21:36 crc kubenswrapper[4828]: W1129 07:21:36.352936 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58916077_c611_4cd6_9b53_b668fa2abb47.slice/crio-77177fd6d7535dac548e3a62692f556fda2e2488a80e3d37854cb780774695aa WatchSource:0}: Error finding container 77177fd6d7535dac548e3a62692f556fda2e2488a80e3d37854cb780774695aa: Status 404 returned error can't find the container with id 77177fd6d7535dac548e3a62692f556fda2e2488a80e3d37854cb780774695aa Nov 29 07:21:36 crc kubenswrapper[4828]: I1129 07:21:36.424673 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"31df9f28-9df3-4686-9aa5-ea45706459fb","Type":"ContainerStarted","Data":"788019401c3363e162ea3467e02ba6d68e588d31999ae355f6ead61c0e68b3e3"} Nov 29 07:21:36 crc kubenswrapper[4828]: I1129 07:21:36.426488 4828 generic.go:334] "Generic (PLEG): container finished" podID="5dfc5563-d6a9-4eb1-8ae8-0aa78200413e" containerID="70b55c114966fbe3c8f47bf771c404e27e29911d9c5f9588692ba92d19002bd0" exitCode=0 Nov 29 07:21:36 crc kubenswrapper[4828]: I1129 07:21:36.426570 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" event={"ID":"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e","Type":"ContainerDied","Data":"70b55c114966fbe3c8f47bf771c404e27e29911d9c5f9588692ba92d19002bd0"} Nov 29 07:21:36 crc kubenswrapper[4828]: I1129 07:21:36.426603 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" 
event={"ID":"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e","Type":"ContainerStarted","Data":"995a176e777e38c79867c326bfc2a44d677532a6f8a9ebef31cf2c464b50ae77"} Nov 29 07:21:36 crc kubenswrapper[4828]: I1129 07:21:36.428755 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rhxqt" event={"ID":"58916077-c611-4cd6-9b53-b668fa2abb47","Type":"ContainerStarted","Data":"77177fd6d7535dac548e3a62692f556fda2e2488a80e3d37854cb780774695aa"} Nov 29 07:21:36 crc kubenswrapper[4828]: I1129 07:21:36.430245 4828 generic.go:334] "Generic (PLEG): container finished" podID="667169da-8564-4c09-8be0-f50d1cce0888" containerID="638f1f5bd9fa537b2e9f80a61843c9473220654d1d4e36ed76ca15bbf6e3a56a" exitCode=0 Nov 29 07:21:36 crc kubenswrapper[4828]: I1129 07:21:36.430314 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" event={"ID":"667169da-8564-4c09-8be0-f50d1cce0888","Type":"ContainerDied","Data":"638f1f5bd9fa537b2e9f80a61843c9473220654d1d4e36ed76ca15bbf6e3a56a"} Nov 29 07:21:36 crc kubenswrapper[4828]: I1129 07:21:36.432486 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" event={"ID":"5600555b-3085-4f9e-a31f-2caa3010ff5c","Type":"ContainerStarted","Data":"a31cf41a3c35fb8c40c12b1339fa67d82755b2905f511c876f8ab2c62314da5b"} Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.081401 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.158737 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4jjb\" (UniqueName: \"kubernetes.io/projected/667169da-8564-4c09-8be0-f50d1cce0888-kube-api-access-b4jjb\") pod \"667169da-8564-4c09-8be0-f50d1cce0888\" (UID: \"667169da-8564-4c09-8be0-f50d1cce0888\") " Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.160016 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/667169da-8564-4c09-8be0-f50d1cce0888-dns-svc\") pod \"667169da-8564-4c09-8be0-f50d1cce0888\" (UID: \"667169da-8564-4c09-8be0-f50d1cce0888\") " Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.160102 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/667169da-8564-4c09-8be0-f50d1cce0888-config\") pod \"667169da-8564-4c09-8be0-f50d1cce0888\" (UID: \"667169da-8564-4c09-8be0-f50d1cce0888\") " Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.172480 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/667169da-8564-4c09-8be0-f50d1cce0888-kube-api-access-b4jjb" (OuterVolumeSpecName: "kube-api-access-b4jjb") pod "667169da-8564-4c09-8be0-f50d1cce0888" (UID: "667169da-8564-4c09-8be0-f50d1cce0888"). InnerVolumeSpecName "kube-api-access-b4jjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.177558 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/667169da-8564-4c09-8be0-f50d1cce0888-config" (OuterVolumeSpecName: "config") pod "667169da-8564-4c09-8be0-f50d1cce0888" (UID: "667169da-8564-4c09-8be0-f50d1cce0888"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.178408 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/667169da-8564-4c09-8be0-f50d1cce0888-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "667169da-8564-4c09-8be0-f50d1cce0888" (UID: "667169da-8564-4c09-8be0-f50d1cce0888"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.262892 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/667169da-8564-4c09-8be0-f50d1cce0888-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.262933 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/667169da-8564-4c09-8be0-f50d1cce0888-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.262946 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4jjb\" (UniqueName: \"kubernetes.io/projected/667169da-8564-4c09-8be0-f50d1cce0888-kube-api-access-b4jjb\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.442687 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" event={"ID":"667169da-8564-4c09-8be0-f50d1cce0888","Type":"ContainerDied","Data":"4a5ec250acb21a4ad3f44080f536b84015b5fc233bb1ae8a1458218fb3fda182"} Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.442993 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gx7c7" Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.443136 4828 scope.go:117] "RemoveContainer" containerID="638f1f5bd9fa537b2e9f80a61843c9473220654d1d4e36ed76ca15bbf6e3a56a" Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.445159 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23acf022-f4ef-4a49-8771-e07792440c6c","Type":"ContainerStarted","Data":"2a873c13c2f495a77812fb79e9150e2cc50d93ed2640dc7f8b77038240447f7f"} Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.515150 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gx7c7"] Nov 29 07:21:37 crc kubenswrapper[4828]: I1129 07:21:37.523821 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gx7c7"] Nov 29 07:21:38 crc kubenswrapper[4828]: I1129 07:21:38.453363 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f86097ba-a57f-4f34-8668-dc1daef612da","Type":"ContainerStarted","Data":"e47bca4d2bb935c5cbbd6d561443044ef8299ceba572316d6daa3aca871ee356"} Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.426023 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="667169da-8564-4c09-8be0-f50d1cce0888" path="/var/lib/kubelet/pods/667169da-8564-4c09-8be0-f50d1cce0888/volumes" Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.464842 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5e6d36a9-09a5-45d6-bae5-89a977408440","Type":"ContainerStarted","Data":"72b485348990f04a8df44040dbe807689a31c54bd4f558da7c6ae35ad7f0ab45"} Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.466717 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"8f120af9-3005-49a2-9099-818ef49164dc","Type":"ContainerStarted","Data":"d2d66a3c80aa933b6e7c0b041d0b0cebe220a071fdca0c195f3ef78191e97747"} Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.466946 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.469586 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bb49e4ad-de75-4a14-bbf3-f5bd0099add6","Type":"ContainerStarted","Data":"0d7f31c79a59a89d5111d03e1be1d47e46320833a3cf9557a4b6495557e7478c"} Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.471903 4828 generic.go:334] "Generic (PLEG): container finished" podID="5600555b-3085-4f9e-a31f-2caa3010ff5c" containerID="b1eb93b1dcee021e39765643a102e6966c8d35b4e8c0081cdc6160c3c3bb82a0" exitCode=0 Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.472017 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" event={"ID":"5600555b-3085-4f9e-a31f-2caa3010ff5c","Type":"ContainerDied","Data":"b1eb93b1dcee021e39765643a102e6966c8d35b4e8c0081cdc6160c3c3bb82a0"} Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.474280 4828 generic.go:334] "Generic (PLEG): container finished" podID="024941c4-acae-45c4-9347-3c981d7a0348" containerID="56538337d00100edada80475321af12b804132841638e2bcb476b23d0db9dd6e" exitCode=0 Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.474365 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" event={"ID":"024941c4-acae-45c4-9347-3c981d7a0348","Type":"ContainerDied","Data":"56538337d00100edada80475321af12b804132841638e2bcb476b23d0db9dd6e"} Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.476773 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" 
event={"ID":"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e","Type":"ContainerStarted","Data":"f30280e3af56f1d0ca9bdb6769fe40b8a6c68f867ea4691813686f5fc2d3cb79"} Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.477637 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.479531 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rhxqt" event={"ID":"58916077-c611-4cd6-9b53-b668fa2abb47","Type":"ContainerStarted","Data":"9f4d634b66d74cbbd25fc587ac7660463c0a6bd3c6f4d833199c7ddf2233f8ea"} Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.528617 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" podStartSLOduration=9.528540992 podStartE2EDuration="9.528540992s" podCreationTimestamp="2025-11-29 07:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:21:39.517437508 +0000 UTC m=+1239.139513576" watchObservedRunningTime="2025-11-29 07:21:39.528540992 +0000 UTC m=+1239.150617050" Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.604129 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-rhxqt" podStartSLOduration=10.604102276999999 podStartE2EDuration="10.604102277s" podCreationTimestamp="2025-11-29 07:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:21:39.600492625 +0000 UTC m=+1239.222568683" watchObservedRunningTime="2025-11-29 07:21:39.604102277 +0000 UTC m=+1239.226178335" Nov 29 07:21:39 crc kubenswrapper[4828]: I1129 07:21:39.644330 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.302818643 
podStartE2EDuration="56.644312957s" podCreationTimestamp="2025-11-29 07:20:43 +0000 UTC" firstStartedPulling="2025-11-29 07:20:45.08690347 +0000 UTC m=+1184.708979528" lastFinishedPulling="2025-11-29 07:21:37.428397784 +0000 UTC m=+1237.050473842" observedRunningTime="2025-11-29 07:21:39.621072622 +0000 UTC m=+1239.243148700" watchObservedRunningTime="2025-11-29 07:21:39.644312957 +0000 UTC m=+1239.266389015" Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.120650 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.312337 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/024941c4-acae-45c4-9347-3c981d7a0348-dns-svc\") pod \"024941c4-acae-45c4-9347-3c981d7a0348\" (UID: \"024941c4-acae-45c4-9347-3c981d7a0348\") " Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.312442 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjrs4\" (UniqueName: \"kubernetes.io/projected/024941c4-acae-45c4-9347-3c981d7a0348-kube-api-access-zjrs4\") pod \"024941c4-acae-45c4-9347-3c981d7a0348\" (UID: \"024941c4-acae-45c4-9347-3c981d7a0348\") " Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.312495 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/024941c4-acae-45c4-9347-3c981d7a0348-config\") pod \"024941c4-acae-45c4-9347-3c981d7a0348\" (UID: \"024941c4-acae-45c4-9347-3c981d7a0348\") " Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.316393 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/024941c4-acae-45c4-9347-3c981d7a0348-kube-api-access-zjrs4" (OuterVolumeSpecName: "kube-api-access-zjrs4") pod "024941c4-acae-45c4-9347-3c981d7a0348" (UID: 
"024941c4-acae-45c4-9347-3c981d7a0348"). InnerVolumeSpecName "kube-api-access-zjrs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.330008 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/024941c4-acae-45c4-9347-3c981d7a0348-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "024941c4-acae-45c4-9347-3c981d7a0348" (UID: "024941c4-acae-45c4-9347-3c981d7a0348"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.331178 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/024941c4-acae-45c4-9347-3c981d7a0348-config" (OuterVolumeSpecName: "config") pod "024941c4-acae-45c4-9347-3c981d7a0348" (UID: "024941c4-acae-45c4-9347-3c981d7a0348"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.415173 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/024941c4-acae-45c4-9347-3c981d7a0348-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.415235 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjrs4\" (UniqueName: \"kubernetes.io/projected/024941c4-acae-45c4-9347-3c981d7a0348-kube-api-access-zjrs4\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.415257 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/024941c4-acae-45c4-9347-3c981d7a0348-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.490580 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"31df9f28-9df3-4686-9aa5-ea45706459fb","Type":"ContainerStarted","Data":"acb33f6e294d474c17a395920e383f830826930ef21218e3114888f4e556b401"} Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.491580 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" event={"ID":"024941c4-acae-45c4-9347-3c981d7a0348","Type":"ContainerDied","Data":"f7c77a8d0cc8ee1024b58218abd48ee235fc69c549783573a68246ff3e0794be"} Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.491608 4828 scope.go:117] "RemoveContainer" containerID="56538337d00100edada80475321af12b804132841638e2bcb476b23d0db9dd6e" Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.491727 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-smsnx" Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.508738 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" event={"ID":"5600555b-3085-4f9e-a31f-2caa3010ff5c","Type":"ContainerStarted","Data":"06405360a519c0575700491e13e9c541a03ae4a77c8ac7d4451491052e0e19f6"} Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.510141 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.530227 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" podStartSLOduration=11.530204062 podStartE2EDuration="11.530204062s" podCreationTimestamp="2025-11-29 07:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:21:40.525195914 +0000 UTC m=+1240.147271972" watchObservedRunningTime="2025-11-29 07:21:40.530204062 +0000 UTC m=+1240.152280120" Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.693465 4828 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-smsnx"] Nov 29 07:21:40 crc kubenswrapper[4828]: I1129 07:21:40.699456 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-smsnx"] Nov 29 07:21:41 crc kubenswrapper[4828]: I1129 07:21:41.421185 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="024941c4-acae-45c4-9347-3c981d7a0348" path="/var/lib/kubelet/pods/024941c4-acae-45c4-9347-3c981d7a0348/volumes" Nov 29 07:21:41 crc kubenswrapper[4828]: I1129 07:21:41.487042 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:21:41 crc kubenswrapper[4828]: I1129 07:21:41.487128 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:21:41 crc kubenswrapper[4828]: I1129 07:21:41.487201 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:21:41 crc kubenswrapper[4828]: I1129 07:21:41.488005 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c82e0ff81acb7d01ceef87bfa4d82fd7e8308a493da4b0fdc2e7187d68f7ed64"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:21:41 crc kubenswrapper[4828]: I1129 07:21:41.488073 4828 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://c82e0ff81acb7d01ceef87bfa4d82fd7e8308a493da4b0fdc2e7187d68f7ed64" gracePeriod=600 Nov 29 07:21:41 crc kubenswrapper[4828]: I1129 07:21:41.518945 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"31df9f28-9df3-4686-9aa5-ea45706459fb","Type":"ContainerStarted","Data":"31694b4c930f23ab2d777f1c76fbbce1daa63b9f5f9f46506a7dbcaadd0de189"} Nov 29 07:21:41 crc kubenswrapper[4828]: I1129 07:21:41.519078 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 29 07:21:41 crc kubenswrapper[4828]: I1129 07:21:41.538941 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=7.820671739 podStartE2EDuration="11.538917632s" podCreationTimestamp="2025-11-29 07:21:30 +0000 UTC" firstStartedPulling="2025-11-29 07:21:36.401659142 +0000 UTC m=+1236.023735200" lastFinishedPulling="2025-11-29 07:21:40.119905035 +0000 UTC m=+1239.741981093" observedRunningTime="2025-11-29 07:21:41.536770517 +0000 UTC m=+1241.158846575" watchObservedRunningTime="2025-11-29 07:21:41.538917632 +0000 UTC m=+1241.160993710" Nov 29 07:21:42 crc kubenswrapper[4828]: I1129 07:21:42.540416 4828 generic.go:334] "Generic (PLEG): container finished" podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="c82e0ff81acb7d01ceef87bfa4d82fd7e8308a493da4b0fdc2e7187d68f7ed64" exitCode=0 Nov 29 07:21:42 crc kubenswrapper[4828]: I1129 07:21:42.540470 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"c82e0ff81acb7d01ceef87bfa4d82fd7e8308a493da4b0fdc2e7187d68f7ed64"} Nov 29 07:21:42 crc kubenswrapper[4828]: I1129 07:21:42.541090 4828 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"f1153e52620f218b272037744559959e572334f0c0db38036c7622fd8f01d457"} Nov 29 07:21:42 crc kubenswrapper[4828]: I1129 07:21:42.541116 4828 scope.go:117] "RemoveContainer" containerID="e5d888f8d3600bd400d965197bc611e5fd51d1d573dbd26ed26d72bf3be20d36" Nov 29 07:21:44 crc kubenswrapper[4828]: I1129 07:21:44.162842 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 29 07:21:45 crc kubenswrapper[4828]: I1129 07:21:45.146407 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" Nov 29 07:21:45 crc kubenswrapper[4828]: I1129 07:21:45.493452 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:45 crc kubenswrapper[4828]: I1129 07:21:45.544669 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ksfrm"] Nov 29 07:21:45 crc kubenswrapper[4828]: I1129 07:21:45.581028 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" podUID="5600555b-3085-4f9e-a31f-2caa3010ff5c" containerName="dnsmasq-dns" containerID="cri-o://06405360a519c0575700491e13e9c541a03ae4a77c8ac7d4451491052e0e19f6" gracePeriod=10 Nov 29 07:21:45 crc kubenswrapper[4828]: I1129 07:21:45.995422 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-q2prx"] Nov 29 07:21:45 crc kubenswrapper[4828]: E1129 07:21:45.997733 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667169da-8564-4c09-8be0-f50d1cce0888" containerName="init" Nov 29 07:21:45 crc kubenswrapper[4828]: I1129 07:21:45.997777 4828 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="667169da-8564-4c09-8be0-f50d1cce0888" containerName="init" Nov 29 07:21:45 crc kubenswrapper[4828]: E1129 07:21:45.997789 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="024941c4-acae-45c4-9347-3c981d7a0348" containerName="init" Nov 29 07:21:45 crc kubenswrapper[4828]: I1129 07:21:45.997797 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="024941c4-acae-45c4-9347-3c981d7a0348" containerName="init" Nov 29 07:21:45 crc kubenswrapper[4828]: I1129 07:21:45.998009 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="024941c4-acae-45c4-9347-3c981d7a0348" containerName="init" Nov 29 07:21:45 crc kubenswrapper[4828]: I1129 07:21:45.998027 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="667169da-8564-4c09-8be0-f50d1cce0888" containerName="init" Nov 29 07:21:45 crc kubenswrapper[4828]: I1129 07:21:45.999021 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.017702 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-q2prx"] Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.042673 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.042739 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-dns-svc\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc 
kubenswrapper[4828]: I1129 07:21:46.042848 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcl7z\" (UniqueName: \"kubernetes.io/projected/9ee7db07-ea2d-4f79-b976-70340967aa87-kube-api-access-wcl7z\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.042910 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.042950 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-config\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.143963 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-dns-svc\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.144114 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcl7z\" (UniqueName: \"kubernetes.io/projected/9ee7db07-ea2d-4f79-b976-70340967aa87-kube-api-access-wcl7z\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc 
kubenswrapper[4828]: I1129 07:21:46.144171 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.144206 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-config\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.144263 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.145209 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.145209 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-config\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.145213 4828 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-dns-svc\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.145313 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.178467 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcl7z\" (UniqueName: \"kubernetes.io/projected/9ee7db07-ea2d-4f79-b976-70340967aa87-kube-api-access-wcl7z\") pod \"dnsmasq-dns-698758b865-q2prx\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.331308 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-q2prx"
Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.589903 4828 generic.go:334] "Generic (PLEG): container finished" podID="5600555b-3085-4f9e-a31f-2caa3010ff5c" containerID="06405360a519c0575700491e13e9c541a03ae4a77c8ac7d4451491052e0e19f6" exitCode=0
Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.589945 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" event={"ID":"5600555b-3085-4f9e-a31f-2caa3010ff5c","Type":"ContainerDied","Data":"06405360a519c0575700491e13e9c541a03ae4a77c8ac7d4451491052e0e19f6"}
Nov 29 07:21:46 crc kubenswrapper[4828]: I1129 07:21:46.790824 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-q2prx"]
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.043725 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.161442 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-config\") pod \"5600555b-3085-4f9e-a31f-2caa3010ff5c\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") "
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.161512 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvnqj\" (UniqueName: \"kubernetes.io/projected/5600555b-3085-4f9e-a31f-2caa3010ff5c-kube-api-access-zvnqj\") pod \"5600555b-3085-4f9e-a31f-2caa3010ff5c\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") "
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.161551 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-ovsdbserver-nb\") pod \"5600555b-3085-4f9e-a31f-2caa3010ff5c\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") "
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.161604 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-dns-svc\") pod \"5600555b-3085-4f9e-a31f-2caa3010ff5c\" (UID: \"5600555b-3085-4f9e-a31f-2caa3010ff5c\") "
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.169225 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5600555b-3085-4f9e-a31f-2caa3010ff5c-kube-api-access-zvnqj" (OuterVolumeSpecName: "kube-api-access-zvnqj") pod "5600555b-3085-4f9e-a31f-2caa3010ff5c" (UID: "5600555b-3085-4f9e-a31f-2caa3010ff5c"). InnerVolumeSpecName "kube-api-access-zvnqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.202959 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-config" (OuterVolumeSpecName: "config") pod "5600555b-3085-4f9e-a31f-2caa3010ff5c" (UID: "5600555b-3085-4f9e-a31f-2caa3010ff5c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.204349 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5600555b-3085-4f9e-a31f-2caa3010ff5c" (UID: "5600555b-3085-4f9e-a31f-2caa3010ff5c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.217121 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5600555b-3085-4f9e-a31f-2caa3010ff5c" (UID: "5600555b-3085-4f9e-a31f-2caa3010ff5c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.242204 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Nov 29 07:21:47 crc kubenswrapper[4828]: E1129 07:21:47.242578 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5600555b-3085-4f9e-a31f-2caa3010ff5c" containerName="init"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.242594 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5600555b-3085-4f9e-a31f-2caa3010ff5c" containerName="init"
Nov 29 07:21:47 crc kubenswrapper[4828]: E1129 07:21:47.242606 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5600555b-3085-4f9e-a31f-2caa3010ff5c" containerName="dnsmasq-dns"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.242613 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5600555b-3085-4f9e-a31f-2caa3010ff5c" containerName="dnsmasq-dns"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.242794 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5600555b-3085-4f9e-a31f-2caa3010ff5c" containerName="dnsmasq-dns"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.269844 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvnqj\" (UniqueName: \"kubernetes.io/projected/5600555b-3085-4f9e-a31f-2caa3010ff5c-kube-api-access-zvnqj\") on node \"crc\" DevicePath \"\""
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.269894 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.269984 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.270003 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5600555b-3085-4f9e-a31f-2caa3010ff5c-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.274002 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.277039 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.278074 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7mpqk"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.278343 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.278524 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.278639 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.472506 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ed93966d-a9d0-456c-b459-f06703deef71-cache\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.472565 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx25f\" (UniqueName: \"kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-kube-api-access-dx25f\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.472675 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ed93966d-a9d0-456c-b459-f06703deef71-lock\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.472775 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.472844 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.575165 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ed93966d-a9d0-456c-b459-f06703deef71-lock\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.575224 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.575351 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.575434 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ed93966d-a9d0-456c-b459-f06703deef71-cache\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.575475 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx25f\" (UniqueName: \"kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-kube-api-access-dx25f\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: E1129 07:21:47.575565 4828 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 29 07:21:47 crc kubenswrapper[4828]: E1129 07:21:47.575605 4828 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.575677 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: E1129 07:21:47.575686 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift podName:ed93966d-a9d0-456c-b459-f06703deef71 nodeName:}" failed. No retries permitted until 2025-11-29 07:21:48.075645294 +0000 UTC m=+1247.697721352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift") pod "swift-storage-0" (UID: "ed93966d-a9d0-456c-b459-f06703deef71") : configmap "swift-ring-files" not found
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.575998 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ed93966d-a9d0-456c-b459-f06703deef71-cache\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.576336 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ed93966d-a9d0-456c-b459-f06703deef71-lock\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.593399 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx25f\" (UniqueName: \"kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-kube-api-access-dx25f\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.596074 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.603498 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.603799 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ksfrm" event={"ID":"5600555b-3085-4f9e-a31f-2caa3010ff5c","Type":"ContainerDied","Data":"a31cf41a3c35fb8c40c12b1339fa67d82755b2905f511c876f8ab2c62314da5b"}
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.603877 4828 scope.go:117] "RemoveContainer" containerID="06405360a519c0575700491e13e9c541a03ae4a77c8ac7d4451491052e0e19f6"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.604974 4828 generic.go:334] "Generic (PLEG): container finished" podID="9ee7db07-ea2d-4f79-b976-70340967aa87" containerID="682a70d914a94d3b46cae360223090d55304c98db24f93003e9debe2d196da63" exitCode=0
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.605017 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-q2prx" event={"ID":"9ee7db07-ea2d-4f79-b976-70340967aa87","Type":"ContainerDied","Data":"682a70d914a94d3b46cae360223090d55304c98db24f93003e9debe2d196da63"}
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.605048 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-q2prx" event={"ID":"9ee7db07-ea2d-4f79-b976-70340967aa87","Type":"ContainerStarted","Data":"623e4ff534af4f07f3f4e6edcfa3bdea3cbfec9675c9a9184c6b0d109202a08b"}
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.634053 4828 scope.go:117] "RemoveContainer" containerID="b1eb93b1dcee021e39765643a102e6966c8d35b4e8c0081cdc6160c3c3bb82a0"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.650937 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ksfrm"]
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.657257 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ksfrm"]
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.786794 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-xb8hk"]
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.788187 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.794426 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.795065 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.799359 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.801727 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-xb8hk"]
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.882877 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c12ad5a-3768-4925-84dc-83e3733f4a49-scripts\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.990678 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-combined-ca-bundle\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.990738 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7c12ad5a-3768-4925-84dc-83e3733f4a49-ring-data-devices\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.990783 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-swiftconf\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.990908 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hcnc\" (UniqueName: \"kubernetes.io/projected/7c12ad5a-3768-4925-84dc-83e3733f4a49-kube-api-access-7hcnc\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.990935 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-dispersionconf\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.990981 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7c12ad5a-3768-4925-84dc-83e3733f4a49-etc-swift\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.991008 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c12ad5a-3768-4925-84dc-83e3733f4a49-scripts\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:47 crc kubenswrapper[4828]: I1129 07:21:47.991885 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c12ad5a-3768-4925-84dc-83e3733f4a49-scripts\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.092347 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-combined-ca-bundle\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.092408 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7c12ad5a-3768-4925-84dc-83e3733f4a49-ring-data-devices\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.092443 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-swiftconf\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.092495 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.092542 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hcnc\" (UniqueName: \"kubernetes.io/projected/7c12ad5a-3768-4925-84dc-83e3733f4a49-kube-api-access-7hcnc\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.092562 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-dispersionconf\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.092599 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7c12ad5a-3768-4925-84dc-83e3733f4a49-etc-swift\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:48 crc kubenswrapper[4828]: E1129 07:21:48.092720 4828 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 29 07:21:48 crc kubenswrapper[4828]: E1129 07:21:48.092756 4828 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 29 07:21:48 crc kubenswrapper[4828]: E1129 07:21:48.092834 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift podName:ed93966d-a9d0-456c-b459-f06703deef71 nodeName:}" failed. No retries permitted until 2025-11-29 07:21:49.092809727 +0000 UTC m=+1248.714885805 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift") pod "swift-storage-0" (UID: "ed93966d-a9d0-456c-b459-f06703deef71") : configmap "swift-ring-files" not found
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.093164 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7c12ad5a-3768-4925-84dc-83e3733f4a49-ring-data-devices\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.094418 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7c12ad5a-3768-4925-84dc-83e3733f4a49-etc-swift\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.099554 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-dispersionconf\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.108092 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-combined-ca-bundle\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.114779 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-swiftconf\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.128122 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hcnc\" (UniqueName: \"kubernetes.io/projected/7c12ad5a-3768-4925-84dc-83e3733f4a49-kube-api-access-7hcnc\") pod \"swift-ring-rebalance-xb8hk\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.409508 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-xb8hk"
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.616555 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-q2prx" event={"ID":"9ee7db07-ea2d-4f79-b976-70340967aa87","Type":"ContainerStarted","Data":"e11bd9624f55cc4017804f0f6964bce48f684d2fb0d376ff52f453ba1bd5506b"}
Nov 29 07:21:48 crc kubenswrapper[4828]: I1129 07:21:48.831986 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-xb8hk"]
Nov 29 07:21:48 crc kubenswrapper[4828]: W1129 07:21:48.839373 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c12ad5a_3768_4925_84dc_83e3733f4a49.slice/crio-ada830fad3e06af1e56403e0d973e6a5f49eb830763be9af40bbcfc2fc0e62c1 WatchSource:0}: Error finding container ada830fad3e06af1e56403e0d973e6a5f49eb830763be9af40bbcfc2fc0e62c1: Status 404 returned error can't find the container with id ada830fad3e06af1e56403e0d973e6a5f49eb830763be9af40bbcfc2fc0e62c1
Nov 29 07:21:49 crc kubenswrapper[4828]: I1129 07:21:49.115412 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:49 crc kubenswrapper[4828]: E1129 07:21:49.115619 4828 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 29 07:21:49 crc kubenswrapper[4828]: E1129 07:21:49.115785 4828 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 29 07:21:49 crc kubenswrapper[4828]: E1129 07:21:49.115845 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift podName:ed93966d-a9d0-456c-b459-f06703deef71 nodeName:}" failed. No retries permitted until 2025-11-29 07:21:51.115826603 +0000 UTC m=+1250.737902661 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift") pod "swift-storage-0" (UID: "ed93966d-a9d0-456c-b459-f06703deef71") : configmap "swift-ring-files" not found
Nov 29 07:21:49 crc kubenswrapper[4828]: I1129 07:21:49.429891 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5600555b-3085-4f9e-a31f-2caa3010ff5c" path="/var/lib/kubelet/pods/5600555b-3085-4f9e-a31f-2caa3010ff5c/volumes"
Nov 29 07:21:49 crc kubenswrapper[4828]: I1129 07:21:49.624711 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xb8hk" event={"ID":"7c12ad5a-3768-4925-84dc-83e3733f4a49","Type":"ContainerStarted","Data":"ada830fad3e06af1e56403e0d973e6a5f49eb830763be9af40bbcfc2fc0e62c1"}
Nov 29 07:21:49 crc kubenswrapper[4828]: I1129 07:21:49.624870 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-q2prx"
Nov 29 07:21:49 crc kubenswrapper[4828]: I1129 07:21:49.646414 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-q2prx" podStartSLOduration=4.646396279 podStartE2EDuration="4.646396279s" podCreationTimestamp="2025-11-29 07:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:21:49.640056317 +0000 UTC m=+1249.262132365" watchObservedRunningTime="2025-11-29 07:21:49.646396279 +0000 UTC m=+1249.268472337"
Nov 29 07:21:51 crc kubenswrapper[4828]: I1129 07:21:51.047932 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Nov 29 07:21:51 crc kubenswrapper[4828]: I1129 07:21:51.144123 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:51 crc kubenswrapper[4828]: E1129 07:21:51.144609 4828 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 29 07:21:51 crc kubenswrapper[4828]: E1129 07:21:51.144839 4828 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 29 07:21:51 crc kubenswrapper[4828]: E1129 07:21:51.144904 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift podName:ed93966d-a9d0-456c-b459-f06703deef71 nodeName:}" failed. No retries permitted until 2025-11-29 07:21:55.144885311 +0000 UTC m=+1254.766961369 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift") pod "swift-storage-0" (UID: "ed93966d-a9d0-456c-b459-f06703deef71") : configmap "swift-ring-files" not found
Nov 29 07:21:54 crc kubenswrapper[4828]: I1129 07:21:54.719009 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hhg6w"
Nov 29 07:21:54 crc kubenswrapper[4828]: I1129 07:21:54.733910 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-twdtp" podUID="5197fd5f-121f-4085-8985-a8e31ee8f997" containerName="ovn-controller" probeResult="failure" output=<
Nov 29 07:21:54 crc kubenswrapper[4828]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Nov 29 07:21:54 crc kubenswrapper[4828]: >
Nov 29 07:21:54 crc kubenswrapper[4828]: I1129 07:21:54.735334 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hhg6w"
Nov 29 07:21:54 crc kubenswrapper[4828]: I1129 07:21:54.958062 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-twdtp-config-cgnt8"]
Nov 29 07:21:54 crc kubenswrapper[4828]: I1129 07:21:54.965143 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:54 crc kubenswrapper[4828]: I1129 07:21:54.967330 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Nov 29 07:21:54 crc kubenswrapper[4828]: I1129 07:21:54.967376 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-twdtp-config-cgnt8"]
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.166949 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc073e6f-b0e4-4e74-8318-b34839f104ba-scripts\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.167025 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-run\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.167052 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-run-ovn\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.167091 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9886\" (UniqueName: \"kubernetes.io/projected/dc073e6f-b0e4-4e74-8318-b34839f104ba-kube-api-access-n9886\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.167121 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/dc073e6f-b0e4-4e74-8318-b34839f104ba-additional-scripts\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.167158 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0"
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.167189 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-log-ovn\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:55 crc kubenswrapper[4828]: E1129 07:21:55.167393 4828 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 29 07:21:55 crc kubenswrapper[4828]: E1129 07:21:55.167419 4828 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 29 07:21:55 crc kubenswrapper[4828]: E1129 07:21:55.167503 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift podName:ed93966d-a9d0-456c-b459-f06703deef71 nodeName:}" failed. No retries permitted until 2025-11-29 07:22:03.167458867 +0000 UTC m=+1262.789534975 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift") pod "swift-storage-0" (UID: "ed93966d-a9d0-456c-b459-f06703deef71") : configmap "swift-ring-files" not found
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.269548 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9886\" (UniqueName: \"kubernetes.io/projected/dc073e6f-b0e4-4e74-8318-b34839f104ba-kube-api-access-n9886\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.272777 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/dc073e6f-b0e4-4e74-8318-b34839f104ba-additional-scripts\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.273392 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-log-ovn\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.273579 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/dc073e6f-b0e4-4e74-8318-b34839f104ba-additional-scripts\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.273604 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc073e6f-b0e4-4e74-8318-b34839f104ba-scripts\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.273731 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-run\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.273787 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-run-ovn\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.273868 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-log-ovn\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8"
Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.273957 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-run-ovn\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID:
\"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8" Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.274025 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-run\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8" Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.275710 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc073e6f-b0e4-4e74-8318-b34839f104ba-scripts\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8" Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.289800 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9886\" (UniqueName: \"kubernetes.io/projected/dc073e6f-b0e4-4e74-8318-b34839f104ba-kube-api-access-n9886\") pod \"ovn-controller-twdtp-config-cgnt8\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " pod="openstack/ovn-controller-twdtp-config-cgnt8" Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.335620 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-twdtp-config-cgnt8" Nov 29 07:21:55 crc kubenswrapper[4828]: I1129 07:21:55.800310 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-twdtp-config-cgnt8"] Nov 29 07:21:56 crc kubenswrapper[4828]: I1129 07:21:56.333441 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:21:56 crc kubenswrapper[4828]: I1129 07:21:56.402423 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-h2tdr"] Nov 29 07:21:56 crc kubenswrapper[4828]: I1129 07:21:56.402697 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" podUID="5dfc5563-d6a9-4eb1-8ae8-0aa78200413e" containerName="dnsmasq-dns" containerID="cri-o://f30280e3af56f1d0ca9bdb6769fe40b8a6c68f867ea4691813686f5fc2d3cb79" gracePeriod=10 Nov 29 07:21:56 crc kubenswrapper[4828]: I1129 07:21:56.682564 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-twdtp-config-cgnt8" event={"ID":"dc073e6f-b0e4-4e74-8318-b34839f104ba","Type":"ContainerStarted","Data":"24d2f52cd03b629bb72fcbcd1773ec670cc527b5d21f529d8fbc9e69c9ea11db"} Nov 29 07:21:56 crc kubenswrapper[4828]: I1129 07:21:56.685598 4828 generic.go:334] "Generic (PLEG): container finished" podID="5dfc5563-d6a9-4eb1-8ae8-0aa78200413e" containerID="f30280e3af56f1d0ca9bdb6769fe40b8a6c68f867ea4691813686f5fc2d3cb79" exitCode=0 Nov 29 07:21:56 crc kubenswrapper[4828]: I1129 07:21:56.685669 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" event={"ID":"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e","Type":"ContainerDied","Data":"f30280e3af56f1d0ca9bdb6769fe40b8a6c68f867ea4691813686f5fc2d3cb79"} Nov 29 07:21:57 crc kubenswrapper[4828]: I1129 07:21:57.697170 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-twdtp-config-cgnt8" event={"ID":"dc073e6f-b0e4-4e74-8318-b34839f104ba","Type":"ContainerStarted","Data":"d569c82f140a10382afe21a3ef6873ad2dcc4b1f0f77aeaef12ce23b77df315a"} Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.707898 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" event={"ID":"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e","Type":"ContainerDied","Data":"995a176e777e38c79867c326bfc2a44d677532a6f8a9ebef31cf2c464b50ae77"} Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.708290 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="995a176e777e38c79867c326bfc2a44d677532a6f8a9ebef31cf2c464b50ae77" Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.712089 4828 generic.go:334] "Generic (PLEG): container finished" podID="dc073e6f-b0e4-4e74-8318-b34839f104ba" containerID="d569c82f140a10382afe21a3ef6873ad2dcc4b1f0f77aeaef12ce23b77df315a" exitCode=0 Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.712146 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-twdtp-config-cgnt8" event={"ID":"dc073e6f-b0e4-4e74-8318-b34839f104ba","Type":"ContainerDied","Data":"d569c82f140a10382afe21a3ef6873ad2dcc4b1f0f77aeaef12ce23b77df315a"} Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.766088 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.833073 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-dns-svc\") pod \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.833127 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-ovsdbserver-nb\") pod \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.833147 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-config\") pod \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.833252 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-ovsdbserver-sb\") pod \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.833421 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rppv\" (UniqueName: \"kubernetes.io/projected/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-kube-api-access-6rppv\") pod \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\" (UID: \"5dfc5563-d6a9-4eb1-8ae8-0aa78200413e\") " Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.839114 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-kube-api-access-6rppv" (OuterVolumeSpecName: "kube-api-access-6rppv") pod "5dfc5563-d6a9-4eb1-8ae8-0aa78200413e" (UID: "5dfc5563-d6a9-4eb1-8ae8-0aa78200413e"). InnerVolumeSpecName "kube-api-access-6rppv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.872775 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5dfc5563-d6a9-4eb1-8ae8-0aa78200413e" (UID: "5dfc5563-d6a9-4eb1-8ae8-0aa78200413e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.873339 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5dfc5563-d6a9-4eb1-8ae8-0aa78200413e" (UID: "5dfc5563-d6a9-4eb1-8ae8-0aa78200413e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.875658 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-config" (OuterVolumeSpecName: "config") pod "5dfc5563-d6a9-4eb1-8ae8-0aa78200413e" (UID: "5dfc5563-d6a9-4eb1-8ae8-0aa78200413e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.882599 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5dfc5563-d6a9-4eb1-8ae8-0aa78200413e" (UID: "5dfc5563-d6a9-4eb1-8ae8-0aa78200413e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.934944 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.935186 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rppv\" (UniqueName: \"kubernetes.io/projected/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-kube-api-access-6rppv\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.935251 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.935338 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:58 crc kubenswrapper[4828]: I1129 07:21:58.935405 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:59 crc kubenswrapper[4828]: I1129 07:21:59.290977 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-twdtp" Nov 29 07:21:59 crc kubenswrapper[4828]: I1129 07:21:59.719068 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-h2tdr" Nov 29 07:21:59 crc kubenswrapper[4828]: I1129 07:21:59.748369 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-h2tdr"] Nov 29 07:21:59 crc kubenswrapper[4828]: I1129 07:21:59.759239 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-h2tdr"] Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.088107 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-twdtp-config-cgnt8" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.155325 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-log-ovn\") pod \"dc073e6f-b0e4-4e74-8318-b34839f104ba\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.155431 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9886\" (UniqueName: \"kubernetes.io/projected/dc073e6f-b0e4-4e74-8318-b34839f104ba-kube-api-access-n9886\") pod \"dc073e6f-b0e4-4e74-8318-b34839f104ba\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.155476 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-run-ovn\") pod \"dc073e6f-b0e4-4e74-8318-b34839f104ba\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.155495 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/dc073e6f-b0e4-4e74-8318-b34839f104ba-additional-scripts\") pod \"dc073e6f-b0e4-4e74-8318-b34839f104ba\" (UID: 
\"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.155539 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-run\") pod \"dc073e6f-b0e4-4e74-8318-b34839f104ba\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.155555 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc073e6f-b0e4-4e74-8318-b34839f104ba-scripts\") pod \"dc073e6f-b0e4-4e74-8318-b34839f104ba\" (UID: \"dc073e6f-b0e4-4e74-8318-b34839f104ba\") " Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.155541 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "dc073e6f-b0e4-4e74-8318-b34839f104ba" (UID: "dc073e6f-b0e4-4e74-8318-b34839f104ba"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.155596 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "dc073e6f-b0e4-4e74-8318-b34839f104ba" (UID: "dc073e6f-b0e4-4e74-8318-b34839f104ba"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.155631 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-run" (OuterVolumeSpecName: "var-run") pod "dc073e6f-b0e4-4e74-8318-b34839f104ba" (UID: "dc073e6f-b0e4-4e74-8318-b34839f104ba"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.156539 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc073e6f-b0e4-4e74-8318-b34839f104ba-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "dc073e6f-b0e4-4e74-8318-b34839f104ba" (UID: "dc073e6f-b0e4-4e74-8318-b34839f104ba"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.156749 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc073e6f-b0e4-4e74-8318-b34839f104ba-scripts" (OuterVolumeSpecName: "scripts") pod "dc073e6f-b0e4-4e74-8318-b34839f104ba" (UID: "dc073e6f-b0e4-4e74-8318-b34839f104ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.159520 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc073e6f-b0e4-4e74-8318-b34839f104ba-kube-api-access-n9886" (OuterVolumeSpecName: "kube-api-access-n9886") pod "dc073e6f-b0e4-4e74-8318-b34839f104ba" (UID: "dc073e6f-b0e4-4e74-8318-b34839f104ba"). InnerVolumeSpecName "kube-api-access-n9886". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.257296 4828 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.257329 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc073e6f-b0e4-4e74-8318-b34839f104ba-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.257338 4828 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.257353 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9886\" (UniqueName: \"kubernetes.io/projected/dc073e6f-b0e4-4e74-8318-b34839f104ba-kube-api-access-n9886\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.257363 4828 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dc073e6f-b0e4-4e74-8318-b34839f104ba-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.257371 4828 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/dc073e6f-b0e4-4e74-8318-b34839f104ba-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.727332 4828 generic.go:334] "Generic (PLEG): container finished" podID="f86097ba-a57f-4f34-8668-dc1daef612da" containerID="e47bca4d2bb935c5cbbd6d561443044ef8299ceba572316d6daa3aca871ee356" exitCode=0 Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.727397 4828 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f86097ba-a57f-4f34-8668-dc1daef612da","Type":"ContainerDied","Data":"e47bca4d2bb935c5cbbd6d561443044ef8299ceba572316d6daa3aca871ee356"} Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.728996 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xb8hk" event={"ID":"7c12ad5a-3768-4925-84dc-83e3733f4a49","Type":"ContainerStarted","Data":"5923a42128e2b716ef667fb650fd887264cd379cad48ccc2b6997f562f7ca8e9"} Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.731883 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-twdtp-config-cgnt8" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.731923 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-twdtp-config-cgnt8" event={"ID":"dc073e6f-b0e4-4e74-8318-b34839f104ba","Type":"ContainerDied","Data":"24d2f52cd03b629bb72fcbcd1773ec670cc527b5d21f529d8fbc9e69c9ea11db"} Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.731967 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24d2f52cd03b629bb72fcbcd1773ec670cc527b5d21f529d8fbc9e69c9ea11db" Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.735203 4828 generic.go:334] "Generic (PLEG): container finished" podID="bb49e4ad-de75-4a14-bbf3-f5bd0099add6" containerID="0d7f31c79a59a89d5111d03e1be1d47e46320833a3cf9557a4b6495557e7478c" exitCode=0 Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.735287 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bb49e4ad-de75-4a14-bbf3-f5bd0099add6","Type":"ContainerDied","Data":"0d7f31c79a59a89d5111d03e1be1d47e46320833a3cf9557a4b6495557e7478c"} Nov 29 07:22:00 crc kubenswrapper[4828]: I1129 07:22:00.766815 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-xb8hk" 
podStartSLOduration=2.666365491 podStartE2EDuration="13.766779329s" podCreationTimestamp="2025-11-29 07:21:47 +0000 UTC" firstStartedPulling="2025-11-29 07:21:48.841608301 +0000 UTC m=+1248.463684359" lastFinishedPulling="2025-11-29 07:21:59.942022139 +0000 UTC m=+1259.564098197" observedRunningTime="2025-11-29 07:22:00.763490725 +0000 UTC m=+1260.385566783" watchObservedRunningTime="2025-11-29 07:22:00.766779329 +0000 UTC m=+1260.388855387" Nov 29 07:22:01 crc kubenswrapper[4828]: I1129 07:22:01.205599 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-twdtp-config-cgnt8"] Nov 29 07:22:01 crc kubenswrapper[4828]: I1129 07:22:01.222109 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-twdtp-config-cgnt8"] Nov 29 07:22:01 crc kubenswrapper[4828]: I1129 07:22:01.424473 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dfc5563-d6a9-4eb1-8ae8-0aa78200413e" path="/var/lib/kubelet/pods/5dfc5563-d6a9-4eb1-8ae8-0aa78200413e/volumes" Nov 29 07:22:01 crc kubenswrapper[4828]: I1129 07:22:01.425243 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc073e6f-b0e4-4e74-8318-b34839f104ba" path="/var/lib/kubelet/pods/dc073e6f-b0e4-4e74-8318-b34839f104ba/volumes" Nov 29 07:22:01 crc kubenswrapper[4828]: I1129 07:22:01.747103 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f86097ba-a57f-4f34-8668-dc1daef612da","Type":"ContainerStarted","Data":"f56796e7f2ff864047bf7a68b45d1f403689494ada18da52099dac7fe97fb098"} Nov 29 07:22:01 crc kubenswrapper[4828]: I1129 07:22:01.750450 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bb49e4ad-de75-4a14-bbf3-f5bd0099add6","Type":"ContainerStarted","Data":"dad8abb6fa6e98b54cfd9f5dd9bbccd6b24c3195604e4eaf1f6c7beeed4031b7"} Nov 29 07:22:01 crc kubenswrapper[4828]: I1129 07:22:01.780205 4828 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=33.772603347 podStartE2EDuration="1m19.780185609s" podCreationTimestamp="2025-11-29 07:20:42 +0000 UTC" firstStartedPulling="2025-11-29 07:20:50.519886112 +0000 UTC m=+1190.141962170" lastFinishedPulling="2025-11-29 07:21:36.527468374 +0000 UTC m=+1236.149544432" observedRunningTime="2025-11-29 07:22:01.772179144 +0000 UTC m=+1261.394255212" watchObservedRunningTime="2025-11-29 07:22:01.780185609 +0000 UTC m=+1261.402261667" Nov 29 07:22:01 crc kubenswrapper[4828]: I1129 07:22:01.795763 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.158399281 podStartE2EDuration="1m21.795741148s" podCreationTimestamp="2025-11-29 07:20:40 +0000 UTC" firstStartedPulling="2025-11-29 07:20:42.888487625 +0000 UTC m=+1182.510563683" lastFinishedPulling="2025-11-29 07:21:36.525829492 +0000 UTC m=+1236.147905550" observedRunningTime="2025-11-29 07:22:01.790642567 +0000 UTC m=+1261.412718625" watchObservedRunningTime="2025-11-29 07:22:01.795741148 +0000 UTC m=+1261.417817206" Nov 29 07:22:02 crc kubenswrapper[4828]: I1129 07:22:02.057811 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 29 07:22:02 crc kubenswrapper[4828]: I1129 07:22:02.058087 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 29 07:22:03 crc kubenswrapper[4828]: I1129 07:22:03.224172 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0" Nov 29 07:22:03 crc kubenswrapper[4828]: E1129 07:22:03.224433 4828 projected.go:288] Couldn't get configMap 
openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 07:22:03 crc kubenswrapper[4828]: E1129 07:22:03.224450 4828 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 07:22:03 crc kubenswrapper[4828]: E1129 07:22:03.224497 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift podName:ed93966d-a9d0-456c-b459-f06703deef71 nodeName:}" failed. No retries permitted until 2025-11-29 07:22:19.224480664 +0000 UTC m=+1278.846556722 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift") pod "swift-storage-0" (UID: "ed93966d-a9d0-456c-b459-f06703deef71") : configmap "swift-ring-files" not found Nov 29 07:22:03 crc kubenswrapper[4828]: I1129 07:22:03.736251 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 29 07:22:03 crc kubenswrapper[4828]: I1129 07:22:03.736615 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 29 07:22:06 crc kubenswrapper[4828]: I1129 07:22:06.216572 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 29 07:22:06 crc kubenswrapper[4828]: I1129 07:22:06.290040 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 29 07:22:06 crc kubenswrapper[4828]: I1129 07:22:06.330552 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 29 07:22:06 crc kubenswrapper[4828]: I1129 07:22:06.411307 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 29 07:22:07 crc kubenswrapper[4828]: 
I1129 07:22:07.826223 4828 generic.go:334] "Generic (PLEG): container finished" podID="7c12ad5a-3768-4925-84dc-83e3733f4a49" containerID="5923a42128e2b716ef667fb650fd887264cd379cad48ccc2b6997f562f7ca8e9" exitCode=0 Nov 29 07:22:07 crc kubenswrapper[4828]: I1129 07:22:07.826301 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xb8hk" event={"ID":"7c12ad5a-3768-4925-84dc-83e3733f4a49","Type":"ContainerDied","Data":"5923a42128e2b716ef667fb650fd887264cd379cad48ccc2b6997f562f7ca8e9"} Nov 29 07:22:08 crc kubenswrapper[4828]: I1129 07:22:08.836994 4828 generic.go:334] "Generic (PLEG): container finished" podID="23acf022-f4ef-4a49-8771-e07792440c6c" containerID="2a873c13c2f495a77812fb79e9150e2cc50d93ed2640dc7f8b77038240447f7f" exitCode=0 Nov 29 07:22:08 crc kubenswrapper[4828]: I1129 07:22:08.837390 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23acf022-f4ef-4a49-8771-e07792440c6c","Type":"ContainerDied","Data":"2a873c13c2f495a77812fb79e9150e2cc50d93ed2640dc7f8b77038240447f7f"} Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.155235 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-xb8hk" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.253414 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hcnc\" (UniqueName: \"kubernetes.io/projected/7c12ad5a-3768-4925-84dc-83e3733f4a49-kube-api-access-7hcnc\") pod \"7c12ad5a-3768-4925-84dc-83e3733f4a49\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.253775 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-swiftconf\") pod \"7c12ad5a-3768-4925-84dc-83e3733f4a49\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.253840 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-combined-ca-bundle\") pod \"7c12ad5a-3768-4925-84dc-83e3733f4a49\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.253893 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-dispersionconf\") pod \"7c12ad5a-3768-4925-84dc-83e3733f4a49\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.253930 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7c12ad5a-3768-4925-84dc-83e3733f4a49-ring-data-devices\") pod \"7c12ad5a-3768-4925-84dc-83e3733f4a49\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.253980 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/7c12ad5a-3768-4925-84dc-83e3733f4a49-scripts\") pod \"7c12ad5a-3768-4925-84dc-83e3733f4a49\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.254037 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7c12ad5a-3768-4925-84dc-83e3733f4a49-etc-swift\") pod \"7c12ad5a-3768-4925-84dc-83e3733f4a49\" (UID: \"7c12ad5a-3768-4925-84dc-83e3733f4a49\") " Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.255333 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c12ad5a-3768-4925-84dc-83e3733f4a49-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7c12ad5a-3768-4925-84dc-83e3733f4a49" (UID: "7c12ad5a-3768-4925-84dc-83e3733f4a49"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.255888 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c12ad5a-3768-4925-84dc-83e3733f4a49-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7c12ad5a-3768-4925-84dc-83e3733f4a49" (UID: "7c12ad5a-3768-4925-84dc-83e3733f4a49"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.259129 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c12ad5a-3768-4925-84dc-83e3733f4a49-kube-api-access-7hcnc" (OuterVolumeSpecName: "kube-api-access-7hcnc") pod "7c12ad5a-3768-4925-84dc-83e3733f4a49" (UID: "7c12ad5a-3768-4925-84dc-83e3733f4a49"). InnerVolumeSpecName "kube-api-access-7hcnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.263232 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7c12ad5a-3768-4925-84dc-83e3733f4a49" (UID: "7c12ad5a-3768-4925-84dc-83e3733f4a49"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.276790 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c12ad5a-3768-4925-84dc-83e3733f4a49" (UID: "7c12ad5a-3768-4925-84dc-83e3733f4a49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.276797 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c12ad5a-3768-4925-84dc-83e3733f4a49-scripts" (OuterVolumeSpecName: "scripts") pod "7c12ad5a-3768-4925-84dc-83e3733f4a49" (UID: "7c12ad5a-3768-4925-84dc-83e3733f4a49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.278160 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7c12ad5a-3768-4925-84dc-83e3733f4a49" (UID: "7c12ad5a-3768-4925-84dc-83e3733f4a49"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.355972 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hcnc\" (UniqueName: \"kubernetes.io/projected/7c12ad5a-3768-4925-84dc-83e3733f4a49-kube-api-access-7hcnc\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.356189 4828 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.356313 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.356405 4828 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7c12ad5a-3768-4925-84dc-83e3733f4a49-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.356497 4828 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7c12ad5a-3768-4925-84dc-83e3733f4a49-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.356584 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c12ad5a-3768-4925-84dc-83e3733f4a49-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.357046 4828 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7c12ad5a-3768-4925-84dc-83e3733f4a49-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.846375 4828 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-xb8hk" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.846362 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xb8hk" event={"ID":"7c12ad5a-3768-4925-84dc-83e3733f4a49","Type":"ContainerDied","Data":"ada830fad3e06af1e56403e0d973e6a5f49eb830763be9af40bbcfc2fc0e62c1"} Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.846512 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ada830fad3e06af1e56403e0d973e6a5f49eb830763be9af40bbcfc2fc0e62c1" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.848625 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23acf022-f4ef-4a49-8771-e07792440c6c","Type":"ContainerStarted","Data":"e71c12f86a4bc62d322d0dac35e19ea3054ec7117d11ee07d6f011064a993a79"} Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.848838 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:22:09 crc kubenswrapper[4828]: I1129 07:22:09.875391 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.364817094 podStartE2EDuration="1m30.87535049s" podCreationTimestamp="2025-11-29 07:20:39 +0000 UTC" firstStartedPulling="2025-11-29 07:20:41.069532318 +0000 UTC m=+1180.691608376" lastFinishedPulling="2025-11-29 07:21:35.580065714 +0000 UTC m=+1235.202141772" observedRunningTime="2025-11-29 07:22:09.871330877 +0000 UTC m=+1269.493406945" watchObservedRunningTime="2025-11-29 07:22:09.87535049 +0000 UTC m=+1269.497426548" Nov 29 07:22:11 crc kubenswrapper[4828]: I1129 07:22:11.865168 4828 generic.go:334] "Generic (PLEG): container finished" podID="5e6d36a9-09a5-45d6-bae5-89a977408440" containerID="72b485348990f04a8df44040dbe807689a31c54bd4f558da7c6ae35ad7f0ab45" exitCode=0 Nov 29 
07:22:11 crc kubenswrapper[4828]: I1129 07:22:11.865248 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5e6d36a9-09a5-45d6-bae5-89a977408440","Type":"ContainerDied","Data":"72b485348990f04a8df44040dbe807689a31c54bd4f558da7c6ae35ad7f0ab45"} Nov 29 07:22:12 crc kubenswrapper[4828]: I1129 07:22:12.878149 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5e6d36a9-09a5-45d6-bae5-89a977408440","Type":"ContainerStarted","Data":"6fd1a0c6e16682cee6ba1e0f5902985866f71e012b89cfbe224a9a750a2cfc86"} Nov 29 07:22:12 crc kubenswrapper[4828]: I1129 07:22:12.879948 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.473138 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.170009858 podStartE2EDuration="1m35.473111057s" podCreationTimestamp="2025-11-29 07:20:38 +0000 UTC" firstStartedPulling="2025-11-29 07:20:41.14853375 +0000 UTC m=+1180.770609808" lastFinishedPulling="2025-11-29 07:21:37.451634939 +0000 UTC m=+1237.073711007" observedRunningTime="2025-11-29 07:22:12.915773085 +0000 UTC m=+1272.537849133" watchObservedRunningTime="2025-11-29 07:22:13.473111057 +0000 UTC m=+1273.095187115" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.475734 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2f95-account-create-update-b9r9q"] Nov 29 07:22:13 crc kubenswrapper[4828]: E1129 07:22:13.476136 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c12ad5a-3768-4925-84dc-83e3733f4a49" containerName="swift-ring-rebalance" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.476161 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c12ad5a-3768-4925-84dc-83e3733f4a49" containerName="swift-ring-rebalance" Nov 29 07:22:13 crc 
kubenswrapper[4828]: E1129 07:22:13.476181 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dfc5563-d6a9-4eb1-8ae8-0aa78200413e" containerName="dnsmasq-dns" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.476189 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dfc5563-d6a9-4eb1-8ae8-0aa78200413e" containerName="dnsmasq-dns" Nov 29 07:22:13 crc kubenswrapper[4828]: E1129 07:22:13.476209 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dfc5563-d6a9-4eb1-8ae8-0aa78200413e" containerName="init" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.476216 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dfc5563-d6a9-4eb1-8ae8-0aa78200413e" containerName="init" Nov 29 07:22:13 crc kubenswrapper[4828]: E1129 07:22:13.476229 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc073e6f-b0e4-4e74-8318-b34839f104ba" containerName="ovn-config" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.476236 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc073e6f-b0e4-4e74-8318-b34839f104ba" containerName="ovn-config" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.476432 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc073e6f-b0e4-4e74-8318-b34839f104ba" containerName="ovn-config" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.476448 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dfc5563-d6a9-4eb1-8ae8-0aa78200413e" containerName="dnsmasq-dns" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.476456 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c12ad5a-3768-4925-84dc-83e3733f4a49" containerName="swift-ring-rebalance" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.477081 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2f95-account-create-update-b9r9q" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.479545 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.491220 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2f95-account-create-update-b9r9q"] Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.533099 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-dq84z"] Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.534524 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dq84z" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.546132 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dq84z"] Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.628936 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f93d5f3-01f0-4035-8d53-22594f87c388-operator-scripts\") pod \"keystone-db-create-dq84z\" (UID: \"5f93d5f3-01f0-4035-8d53-22594f87c388\") " pod="openstack/keystone-db-create-dq84z" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.629057 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8-operator-scripts\") pod \"keystone-2f95-account-create-update-b9r9q\" (UID: \"2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8\") " pod="openstack/keystone-2f95-account-create-update-b9r9q" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.629135 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg9p6\" (UniqueName: 
\"kubernetes.io/projected/2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8-kube-api-access-bg9p6\") pod \"keystone-2f95-account-create-update-b9r9q\" (UID: \"2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8\") " pod="openstack/keystone-2f95-account-create-update-b9r9q" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.629381 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx4tx\" (UniqueName: \"kubernetes.io/projected/5f93d5f3-01f0-4035-8d53-22594f87c388-kube-api-access-fx4tx\") pod \"keystone-db-create-dq84z\" (UID: \"5f93d5f3-01f0-4035-8d53-22594f87c388\") " pod="openstack/keystone-db-create-dq84z" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.731677 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f93d5f3-01f0-4035-8d53-22594f87c388-operator-scripts\") pod \"keystone-db-create-dq84z\" (UID: \"5f93d5f3-01f0-4035-8d53-22594f87c388\") " pod="openstack/keystone-db-create-dq84z" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.732161 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8-operator-scripts\") pod \"keystone-2f95-account-create-update-b9r9q\" (UID: \"2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8\") " pod="openstack/keystone-2f95-account-create-update-b9r9q" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.732193 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg9p6\" (UniqueName: \"kubernetes.io/projected/2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8-kube-api-access-bg9p6\") pod \"keystone-2f95-account-create-update-b9r9q\" (UID: \"2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8\") " pod="openstack/keystone-2f95-account-create-update-b9r9q" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.732292 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fx4tx\" (UniqueName: \"kubernetes.io/projected/5f93d5f3-01f0-4035-8d53-22594f87c388-kube-api-access-fx4tx\") pod \"keystone-db-create-dq84z\" (UID: \"5f93d5f3-01f0-4035-8d53-22594f87c388\") " pod="openstack/keystone-db-create-dq84z" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.732869 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f93d5f3-01f0-4035-8d53-22594f87c388-operator-scripts\") pod \"keystone-db-create-dq84z\" (UID: \"5f93d5f3-01f0-4035-8d53-22594f87c388\") " pod="openstack/keystone-db-create-dq84z" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.733080 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8-operator-scripts\") pod \"keystone-2f95-account-create-update-b9r9q\" (UID: \"2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8\") " pod="openstack/keystone-2f95-account-create-update-b9r9q" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.770793 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-4bmqd"] Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.771831 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4bmqd" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.779196 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx4tx\" (UniqueName: \"kubernetes.io/projected/5f93d5f3-01f0-4035-8d53-22594f87c388-kube-api-access-fx4tx\") pod \"keystone-db-create-dq84z\" (UID: \"5f93d5f3-01f0-4035-8d53-22594f87c388\") " pod="openstack/keystone-db-create-dq84z" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.779340 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg9p6\" (UniqueName: \"kubernetes.io/projected/2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8-kube-api-access-bg9p6\") pod \"keystone-2f95-account-create-update-b9r9q\" (UID: \"2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8\") " pod="openstack/keystone-2f95-account-create-update-b9r9q" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.796568 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4bmqd"] Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.801406 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2f95-account-create-update-b9r9q" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.858698 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dq84z" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.875000 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-58d6-account-create-update-hg569"] Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.876293 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-58d6-account-create-update-hg569" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.888701 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.898476 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-58d6-account-create-update-hg569"] Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.944504 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d92091c-a581-48d8-8e33-8f54e57a03a3-operator-scripts\") pod \"placement-db-create-4bmqd\" (UID: \"5d92091c-a581-48d8-8e33-8f54e57a03a3\") " pod="openstack/placement-db-create-4bmqd" Nov 29 07:22:13 crc kubenswrapper[4828]: I1129 07:22:13.944569 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8sxz\" (UniqueName: \"kubernetes.io/projected/5d92091c-a581-48d8-8e33-8f54e57a03a3-kube-api-access-d8sxz\") pod \"placement-db-create-4bmqd\" (UID: \"5d92091c-a581-48d8-8e33-8f54e57a03a3\") " pod="openstack/placement-db-create-4bmqd" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.046233 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55618ab7-858f-49e2-b3ff-259cf7eb69ed-operator-scripts\") pod \"placement-58d6-account-create-update-hg569\" (UID: \"55618ab7-858f-49e2-b3ff-259cf7eb69ed\") " pod="openstack/placement-58d6-account-create-update-hg569" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.046695 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d92091c-a581-48d8-8e33-8f54e57a03a3-operator-scripts\") pod \"placement-db-create-4bmqd\" (UID: 
\"5d92091c-a581-48d8-8e33-8f54e57a03a3\") " pod="openstack/placement-db-create-4bmqd" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.046742 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8sxz\" (UniqueName: \"kubernetes.io/projected/5d92091c-a581-48d8-8e33-8f54e57a03a3-kube-api-access-d8sxz\") pod \"placement-db-create-4bmqd\" (UID: \"5d92091c-a581-48d8-8e33-8f54e57a03a3\") " pod="openstack/placement-db-create-4bmqd" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.046801 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9sdk\" (UniqueName: \"kubernetes.io/projected/55618ab7-858f-49e2-b3ff-259cf7eb69ed-kube-api-access-m9sdk\") pod \"placement-58d6-account-create-update-hg569\" (UID: \"55618ab7-858f-49e2-b3ff-259cf7eb69ed\") " pod="openstack/placement-58d6-account-create-update-hg569" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.047471 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d92091c-a581-48d8-8e33-8f54e57a03a3-operator-scripts\") pod \"placement-db-create-4bmqd\" (UID: \"5d92091c-a581-48d8-8e33-8f54e57a03a3\") " pod="openstack/placement-db-create-4bmqd" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.078865 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8sxz\" (UniqueName: \"kubernetes.io/projected/5d92091c-a581-48d8-8e33-8f54e57a03a3-kube-api-access-d8sxz\") pod \"placement-db-create-4bmqd\" (UID: \"5d92091c-a581-48d8-8e33-8f54e57a03a3\") " pod="openstack/placement-db-create-4bmqd" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.151433 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55618ab7-858f-49e2-b3ff-259cf7eb69ed-operator-scripts\") pod 
\"placement-58d6-account-create-update-hg569\" (UID: \"55618ab7-858f-49e2-b3ff-259cf7eb69ed\") " pod="openstack/placement-58d6-account-create-update-hg569" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.151540 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9sdk\" (UniqueName: \"kubernetes.io/projected/55618ab7-858f-49e2-b3ff-259cf7eb69ed-kube-api-access-m9sdk\") pod \"placement-58d6-account-create-update-hg569\" (UID: \"55618ab7-858f-49e2-b3ff-259cf7eb69ed\") " pod="openstack/placement-58d6-account-create-update-hg569" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.152685 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55618ab7-858f-49e2-b3ff-259cf7eb69ed-operator-scripts\") pod \"placement-58d6-account-create-update-hg569\" (UID: \"55618ab7-858f-49e2-b3ff-259cf7eb69ed\") " pod="openstack/placement-58d6-account-create-update-hg569" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.183934 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9sdk\" (UniqueName: \"kubernetes.io/projected/55618ab7-858f-49e2-b3ff-259cf7eb69ed-kube-api-access-m9sdk\") pod \"placement-58d6-account-create-update-hg569\" (UID: \"55618ab7-858f-49e2-b3ff-259cf7eb69ed\") " pod="openstack/placement-58d6-account-create-update-hg569" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.279414 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4bmqd" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.286789 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2f95-account-create-update-b9r9q"] Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.298653 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-58d6-account-create-update-hg569" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.337838 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-c66nk"] Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.339387 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c66nk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.347867 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c66nk"] Nov 29 07:22:14 crc kubenswrapper[4828]: W1129 07:22:14.353383 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a9cfc4a_a81b_42f3_8ee1_6a97fd9ab4d8.slice/crio-4ea4a97176bd0df255b8528a9ee62fc221b43ccebfce88bf26bda9ddca35ee51 WatchSource:0}: Error finding container 4ea4a97176bd0df255b8528a9ee62fc221b43ccebfce88bf26bda9ddca35ee51: Status 404 returned error can't find the container with id 4ea4a97176bd0df255b8528a9ee62fc221b43ccebfce88bf26bda9ddca35ee51 Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.420770 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2d11-account-create-update-7mbnk"] Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.423000 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2d11-account-create-update-7mbnk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.424773 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.449179 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2d11-account-create-update-7mbnk"] Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.457625 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/946d34b3-2986-4833-bd08-b898ddd4fcd7-operator-scripts\") pod \"glance-db-create-c66nk\" (UID: \"946d34b3-2986-4833-bd08-b898ddd4fcd7\") " pod="openstack/glance-db-create-c66nk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.458034 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rq5d\" (UniqueName: \"kubernetes.io/projected/946d34b3-2986-4833-bd08-b898ddd4fcd7-kube-api-access-5rq5d\") pod \"glance-db-create-c66nk\" (UID: \"946d34b3-2986-4833-bd08-b898ddd4fcd7\") " pod="openstack/glance-db-create-c66nk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.559488 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca69608-e449-4f32-b236-6a59faa37c3f-operator-scripts\") pod \"glance-2d11-account-create-update-7mbnk\" (UID: \"bca69608-e449-4f32-b236-6a59faa37c3f\") " pod="openstack/glance-2d11-account-create-update-7mbnk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.559635 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rq5d\" (UniqueName: \"kubernetes.io/projected/946d34b3-2986-4833-bd08-b898ddd4fcd7-kube-api-access-5rq5d\") pod \"glance-db-create-c66nk\" (UID: 
\"946d34b3-2986-4833-bd08-b898ddd4fcd7\") " pod="openstack/glance-db-create-c66nk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.559778 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfglx\" (UniqueName: \"kubernetes.io/projected/bca69608-e449-4f32-b236-6a59faa37c3f-kube-api-access-vfglx\") pod \"glance-2d11-account-create-update-7mbnk\" (UID: \"bca69608-e449-4f32-b236-6a59faa37c3f\") " pod="openstack/glance-2d11-account-create-update-7mbnk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.559834 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/946d34b3-2986-4833-bd08-b898ddd4fcd7-operator-scripts\") pod \"glance-db-create-c66nk\" (UID: \"946d34b3-2986-4833-bd08-b898ddd4fcd7\") " pod="openstack/glance-db-create-c66nk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.560675 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/946d34b3-2986-4833-bd08-b898ddd4fcd7-operator-scripts\") pod \"glance-db-create-c66nk\" (UID: \"946d34b3-2986-4833-bd08-b898ddd4fcd7\") " pod="openstack/glance-db-create-c66nk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.572076 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dq84z"] Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.590773 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rq5d\" (UniqueName: \"kubernetes.io/projected/946d34b3-2986-4833-bd08-b898ddd4fcd7-kube-api-access-5rq5d\") pod \"glance-db-create-c66nk\" (UID: \"946d34b3-2986-4833-bd08-b898ddd4fcd7\") " pod="openstack/glance-db-create-c66nk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.661452 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfglx\" 
(UniqueName: \"kubernetes.io/projected/bca69608-e449-4f32-b236-6a59faa37c3f-kube-api-access-vfglx\") pod \"glance-2d11-account-create-update-7mbnk\" (UID: \"bca69608-e449-4f32-b236-6a59faa37c3f\") " pod="openstack/glance-2d11-account-create-update-7mbnk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.661533 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca69608-e449-4f32-b236-6a59faa37c3f-operator-scripts\") pod \"glance-2d11-account-create-update-7mbnk\" (UID: \"bca69608-e449-4f32-b236-6a59faa37c3f\") " pod="openstack/glance-2d11-account-create-update-7mbnk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.662742 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca69608-e449-4f32-b236-6a59faa37c3f-operator-scripts\") pod \"glance-2d11-account-create-update-7mbnk\" (UID: \"bca69608-e449-4f32-b236-6a59faa37c3f\") " pod="openstack/glance-2d11-account-create-update-7mbnk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.666081 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c66nk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.689169 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfglx\" (UniqueName: \"kubernetes.io/projected/bca69608-e449-4f32-b236-6a59faa37c3f-kube-api-access-vfglx\") pod \"glance-2d11-account-create-update-7mbnk\" (UID: \"bca69608-e449-4f32-b236-6a59faa37c3f\") " pod="openstack/glance-2d11-account-create-update-7mbnk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.747803 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2d11-account-create-update-7mbnk" Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.919227 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dq84z" event={"ID":"5f93d5f3-01f0-4035-8d53-22594f87c388","Type":"ContainerStarted","Data":"6d12b73bb4584f32adb25708b3d2f6b76cd12be63e5507401f85ad4f6ad47d87"} Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.919630 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dq84z" event={"ID":"5f93d5f3-01f0-4035-8d53-22594f87c388","Type":"ContainerStarted","Data":"345eaea2cfd38d55ff104e0d174634941424b8d34cbe4546e01d9bf40e1b26dd"} Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.926076 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f95-account-create-update-b9r9q" event={"ID":"2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8","Type":"ContainerStarted","Data":"17e5bfcfc9d65ef62cb3643b7962fe86bf515683d93a08d2bff23b99360bd7f2"} Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.926108 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f95-account-create-update-b9r9q" event={"ID":"2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8","Type":"ContainerStarted","Data":"4ea4a97176bd0df255b8528a9ee62fc221b43ccebfce88bf26bda9ddca35ee51"} Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.940676 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4bmqd"] Nov 29 07:22:14 crc kubenswrapper[4828]: I1129 07:22:14.985452 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-58d6-account-create-update-hg569"] Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.006072 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c66nk"] Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.289992 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-2d11-account-create-update-7mbnk"] Nov 29 07:22:15 crc kubenswrapper[4828]: W1129 07:22:15.293184 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbca69608_e449_4f32_b236_6a59faa37c3f.slice/crio-86be53176435eafb8b23acd5d54562f50faee2ad5ca273dd6394a1efb7c20178 WatchSource:0}: Error finding container 86be53176435eafb8b23acd5d54562f50faee2ad5ca273dd6394a1efb7c20178: Status 404 returned error can't find the container with id 86be53176435eafb8b23acd5d54562f50faee2ad5ca273dd6394a1efb7c20178 Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.935138 4828 generic.go:334] "Generic (PLEG): container finished" podID="5f93d5f3-01f0-4035-8d53-22594f87c388" containerID="6d12b73bb4584f32adb25708b3d2f6b76cd12be63e5507401f85ad4f6ad47d87" exitCode=0 Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.935250 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dq84z" event={"ID":"5f93d5f3-01f0-4035-8d53-22594f87c388","Type":"ContainerDied","Data":"6d12b73bb4584f32adb25708b3d2f6b76cd12be63e5507401f85ad4f6ad47d87"} Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.938156 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58d6-account-create-update-hg569" event={"ID":"55618ab7-858f-49e2-b3ff-259cf7eb69ed","Type":"ContainerStarted","Data":"22ce5b9192da0079d361063506d9bf650a257968c4b10b1ffe8ccb5db31359c9"} Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.938198 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58d6-account-create-update-hg569" event={"ID":"55618ab7-858f-49e2-b3ff-259cf7eb69ed","Type":"ContainerStarted","Data":"e30a9efea3247f049ab6a923bc282961a96123c2bc11c87d69dc52da1dae37b6"} Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.939731 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d92091c-a581-48d8-8e33-8f54e57a03a3" 
containerID="51cd7045af13ba1dca7f9fdde7ba2cc089d236ab61f1cab0656124cc2b6929b8" exitCode=0 Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.939802 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4bmqd" event={"ID":"5d92091c-a581-48d8-8e33-8f54e57a03a3","Type":"ContainerDied","Data":"51cd7045af13ba1dca7f9fdde7ba2cc089d236ab61f1cab0656124cc2b6929b8"} Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.939831 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4bmqd" event={"ID":"5d92091c-a581-48d8-8e33-8f54e57a03a3","Type":"ContainerStarted","Data":"0d759e5b4171f3274208f99f9a5e91792873a0c9bc3be5d4d62934d819ec1910"} Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.941253 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2d11-account-create-update-7mbnk" event={"ID":"bca69608-e449-4f32-b236-6a59faa37c3f","Type":"ContainerStarted","Data":"25b43cf2a6628b10eaeed71e2a5d11945dcf4ed71829c8d044f334d5acfdb19e"} Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.941332 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2d11-account-create-update-7mbnk" event={"ID":"bca69608-e449-4f32-b236-6a59faa37c3f","Type":"ContainerStarted","Data":"86be53176435eafb8b23acd5d54562f50faee2ad5ca273dd6394a1efb7c20178"} Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.942694 4828 generic.go:334] "Generic (PLEG): container finished" podID="946d34b3-2986-4833-bd08-b898ddd4fcd7" containerID="f264c7ec47625ca59ddd10ab8843e20108222d4a13a9a5ff6c6ee913ffe21e6a" exitCode=0 Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.942730 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c66nk" event={"ID":"946d34b3-2986-4833-bd08-b898ddd4fcd7","Type":"ContainerDied","Data":"f264c7ec47625ca59ddd10ab8843e20108222d4a13a9a5ff6c6ee913ffe21e6a"} Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.942772 4828 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c66nk" event={"ID":"946d34b3-2986-4833-bd08-b898ddd4fcd7","Type":"ContainerStarted","Data":"9510b619ca2ddb6e7096dd3f0f258430aaf36dea706927e2fd51470d94c4b91a"} Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.944111 4828 generic.go:334] "Generic (PLEG): container finished" podID="2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8" containerID="17e5bfcfc9d65ef62cb3643b7962fe86bf515683d93a08d2bff23b99360bd7f2" exitCode=0 Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.944148 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f95-account-create-update-b9r9q" event={"ID":"2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8","Type":"ContainerDied","Data":"17e5bfcfc9d65ef62cb3643b7962fe86bf515683d93a08d2bff23b99360bd7f2"} Nov 29 07:22:15 crc kubenswrapper[4828]: I1129 07:22:15.992150 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-2d11-account-create-update-7mbnk" podStartSLOduration=1.992125961 podStartE2EDuration="1.992125961s" podCreationTimestamp="2025-11-29 07:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:22:15.989939915 +0000 UTC m=+1275.612015973" watchObservedRunningTime="2025-11-29 07:22:15.992125961 +0000 UTC m=+1275.614202019" Nov 29 07:22:16 crc kubenswrapper[4828]: I1129 07:22:16.023616 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-58d6-account-create-update-hg569" podStartSLOduration=3.023597717 podStartE2EDuration="3.023597717s" podCreationTimestamp="2025-11-29 07:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:22:16.020652122 +0000 UTC m=+1275.642728180" watchObservedRunningTime="2025-11-29 07:22:16.023597717 +0000 UTC m=+1275.645673765" Nov 29 
07:22:16 crc kubenswrapper[4828]: I1129 07:22:16.954007 4828 generic.go:334] "Generic (PLEG): container finished" podID="bca69608-e449-4f32-b236-6a59faa37c3f" containerID="25b43cf2a6628b10eaeed71e2a5d11945dcf4ed71829c8d044f334d5acfdb19e" exitCode=0 Nov 29 07:22:16 crc kubenswrapper[4828]: I1129 07:22:16.954092 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2d11-account-create-update-7mbnk" event={"ID":"bca69608-e449-4f32-b236-6a59faa37c3f","Type":"ContainerDied","Data":"25b43cf2a6628b10eaeed71e2a5d11945dcf4ed71829c8d044f334d5acfdb19e"} Nov 29 07:22:16 crc kubenswrapper[4828]: I1129 07:22:16.957424 4828 generic.go:334] "Generic (PLEG): container finished" podID="55618ab7-858f-49e2-b3ff-259cf7eb69ed" containerID="22ce5b9192da0079d361063506d9bf650a257968c4b10b1ffe8ccb5db31359c9" exitCode=0 Nov 29 07:22:16 crc kubenswrapper[4828]: I1129 07:22:16.957534 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58d6-account-create-update-hg569" event={"ID":"55618ab7-858f-49e2-b3ff-259cf7eb69ed","Type":"ContainerDied","Data":"22ce5b9192da0079d361063506d9bf650a257968c4b10b1ffe8ccb5db31359c9"} Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.401614 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c66nk" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.520855 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4bmqd" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.522753 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rq5d\" (UniqueName: \"kubernetes.io/projected/946d34b3-2986-4833-bd08-b898ddd4fcd7-kube-api-access-5rq5d\") pod \"946d34b3-2986-4833-bd08-b898ddd4fcd7\" (UID: \"946d34b3-2986-4833-bd08-b898ddd4fcd7\") " Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.522911 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/946d34b3-2986-4833-bd08-b898ddd4fcd7-operator-scripts\") pod \"946d34b3-2986-4833-bd08-b898ddd4fcd7\" (UID: \"946d34b3-2986-4833-bd08-b898ddd4fcd7\") " Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.523973 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/946d34b3-2986-4833-bd08-b898ddd4fcd7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "946d34b3-2986-4833-bd08-b898ddd4fcd7" (UID: "946d34b3-2986-4833-bd08-b898ddd4fcd7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.530393 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2f95-account-create-update-b9r9q" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.531608 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946d34b3-2986-4833-bd08-b898ddd4fcd7-kube-api-access-5rq5d" (OuterVolumeSpecName: "kube-api-access-5rq5d") pod "946d34b3-2986-4833-bd08-b898ddd4fcd7" (UID: "946d34b3-2986-4833-bd08-b898ddd4fcd7"). InnerVolumeSpecName "kube-api-access-5rq5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.574430 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dq84z" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.624719 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d92091c-a581-48d8-8e33-8f54e57a03a3-operator-scripts\") pod \"5d92091c-a581-48d8-8e33-8f54e57a03a3\" (UID: \"5d92091c-a581-48d8-8e33-8f54e57a03a3\") " Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.624970 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8-operator-scripts\") pod \"2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8\" (UID: \"2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8\") " Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.625101 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg9p6\" (UniqueName: \"kubernetes.io/projected/2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8-kube-api-access-bg9p6\") pod \"2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8\" (UID: \"2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8\") " Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.625212 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d92091c-a581-48d8-8e33-8f54e57a03a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d92091c-a581-48d8-8e33-8f54e57a03a3" (UID: "5d92091c-a581-48d8-8e33-8f54e57a03a3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.625454 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8sxz\" (UniqueName: \"kubernetes.io/projected/5d92091c-a581-48d8-8e33-8f54e57a03a3-kube-api-access-d8sxz\") pod \"5d92091c-a581-48d8-8e33-8f54e57a03a3\" (UID: \"5d92091c-a581-48d8-8e33-8f54e57a03a3\") " Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.625668 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8" (UID: "2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.626096 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d92091c-a581-48d8-8e33-8f54e57a03a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.626193 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.626301 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rq5d\" (UniqueName: \"kubernetes.io/projected/946d34b3-2986-4833-bd08-b898ddd4fcd7-kube-api-access-5rq5d\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.626399 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/946d34b3-2986-4833-bd08-b898ddd4fcd7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:17 crc 
kubenswrapper[4828]: I1129 07:22:17.628158 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8-kube-api-access-bg9p6" (OuterVolumeSpecName: "kube-api-access-bg9p6") pod "2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8" (UID: "2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8"). InnerVolumeSpecName "kube-api-access-bg9p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.629074 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d92091c-a581-48d8-8e33-8f54e57a03a3-kube-api-access-d8sxz" (OuterVolumeSpecName: "kube-api-access-d8sxz") pod "5d92091c-a581-48d8-8e33-8f54e57a03a3" (UID: "5d92091c-a581-48d8-8e33-8f54e57a03a3"). InnerVolumeSpecName "kube-api-access-d8sxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.727724 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f93d5f3-01f0-4035-8d53-22594f87c388-operator-scripts\") pod \"5f93d5f3-01f0-4035-8d53-22594f87c388\" (UID: \"5f93d5f3-01f0-4035-8d53-22594f87c388\") " Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.727843 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx4tx\" (UniqueName: \"kubernetes.io/projected/5f93d5f3-01f0-4035-8d53-22594f87c388-kube-api-access-fx4tx\") pod \"5f93d5f3-01f0-4035-8d53-22594f87c388\" (UID: \"5f93d5f3-01f0-4035-8d53-22594f87c388\") " Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.728126 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f93d5f3-01f0-4035-8d53-22594f87c388-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f93d5f3-01f0-4035-8d53-22594f87c388" (UID: "5f93d5f3-01f0-4035-8d53-22594f87c388"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.728235 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg9p6\" (UniqueName: \"kubernetes.io/projected/2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8-kube-api-access-bg9p6\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.728256 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f93d5f3-01f0-4035-8d53-22594f87c388-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.728280 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8sxz\" (UniqueName: \"kubernetes.io/projected/5d92091c-a581-48d8-8e33-8f54e57a03a3-kube-api-access-d8sxz\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.731404 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f93d5f3-01f0-4035-8d53-22594f87c388-kube-api-access-fx4tx" (OuterVolumeSpecName: "kube-api-access-fx4tx") pod "5f93d5f3-01f0-4035-8d53-22594f87c388" (UID: "5f93d5f3-01f0-4035-8d53-22594f87c388"). InnerVolumeSpecName "kube-api-access-fx4tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.830135 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx4tx\" (UniqueName: \"kubernetes.io/projected/5f93d5f3-01f0-4035-8d53-22594f87c388-kube-api-access-fx4tx\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.967143 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f95-account-create-update-b9r9q" event={"ID":"2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8","Type":"ContainerDied","Data":"4ea4a97176bd0df255b8528a9ee62fc221b43ccebfce88bf26bda9ddca35ee51"} Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.967207 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ea4a97176bd0df255b8528a9ee62fc221b43ccebfce88bf26bda9ddca35ee51" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.967171 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2f95-account-create-update-b9r9q" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.968302 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dq84z" event={"ID":"5f93d5f3-01f0-4035-8d53-22594f87c388","Type":"ContainerDied","Data":"345eaea2cfd38d55ff104e0d174634941424b8d34cbe4546e01d9bf40e1b26dd"} Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.968322 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="345eaea2cfd38d55ff104e0d174634941424b8d34cbe4546e01d9bf40e1b26dd" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.968363 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dq84z" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.970379 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4bmqd" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.970945 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4bmqd" event={"ID":"5d92091c-a581-48d8-8e33-8f54e57a03a3","Type":"ContainerDied","Data":"0d759e5b4171f3274208f99f9a5e91792873a0c9bc3be5d4d62934d819ec1910"} Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.971054 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d759e5b4171f3274208f99f9a5e91792873a0c9bc3be5d4d62934d819ec1910" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.972485 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c66nk" event={"ID":"946d34b3-2986-4833-bd08-b898ddd4fcd7","Type":"ContainerDied","Data":"9510b619ca2ddb6e7096dd3f0f258430aaf36dea706927e2fd51470d94c4b91a"} Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.972522 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9510b619ca2ddb6e7096dd3f0f258430aaf36dea706927e2fd51470d94c4b91a" Nov 29 07:22:17 crc kubenswrapper[4828]: I1129 07:22:17.972735 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c66nk" Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.233674 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2d11-account-create-update-7mbnk" Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.287222 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-58d6-account-create-update-hg569" Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.337044 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca69608-e449-4f32-b236-6a59faa37c3f-operator-scripts\") pod \"bca69608-e449-4f32-b236-6a59faa37c3f\" (UID: \"bca69608-e449-4f32-b236-6a59faa37c3f\") " Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.337148 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfglx\" (UniqueName: \"kubernetes.io/projected/bca69608-e449-4f32-b236-6a59faa37c3f-kube-api-access-vfglx\") pod \"bca69608-e449-4f32-b236-6a59faa37c3f\" (UID: \"bca69608-e449-4f32-b236-6a59faa37c3f\") " Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.338722 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bca69608-e449-4f32-b236-6a59faa37c3f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bca69608-e449-4f32-b236-6a59faa37c3f" (UID: "bca69608-e449-4f32-b236-6a59faa37c3f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.342437 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca69608-e449-4f32-b236-6a59faa37c3f-kube-api-access-vfglx" (OuterVolumeSpecName: "kube-api-access-vfglx") pod "bca69608-e449-4f32-b236-6a59faa37c3f" (UID: "bca69608-e449-4f32-b236-6a59faa37c3f"). InnerVolumeSpecName "kube-api-access-vfglx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.438815 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9sdk\" (UniqueName: \"kubernetes.io/projected/55618ab7-858f-49e2-b3ff-259cf7eb69ed-kube-api-access-m9sdk\") pod \"55618ab7-858f-49e2-b3ff-259cf7eb69ed\" (UID: \"55618ab7-858f-49e2-b3ff-259cf7eb69ed\") " Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.439024 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55618ab7-858f-49e2-b3ff-259cf7eb69ed-operator-scripts\") pod \"55618ab7-858f-49e2-b3ff-259cf7eb69ed\" (UID: \"55618ab7-858f-49e2-b3ff-259cf7eb69ed\") " Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.439403 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca69608-e449-4f32-b236-6a59faa37c3f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.439422 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfglx\" (UniqueName: \"kubernetes.io/projected/bca69608-e449-4f32-b236-6a59faa37c3f-kube-api-access-vfglx\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.439815 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55618ab7-858f-49e2-b3ff-259cf7eb69ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55618ab7-858f-49e2-b3ff-259cf7eb69ed" (UID: "55618ab7-858f-49e2-b3ff-259cf7eb69ed"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.443690 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55618ab7-858f-49e2-b3ff-259cf7eb69ed-kube-api-access-m9sdk" (OuterVolumeSpecName: "kube-api-access-m9sdk") pod "55618ab7-858f-49e2-b3ff-259cf7eb69ed" (UID: "55618ab7-858f-49e2-b3ff-259cf7eb69ed"). InnerVolumeSpecName "kube-api-access-m9sdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.542818 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55618ab7-858f-49e2-b3ff-259cf7eb69ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.542889 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9sdk\" (UniqueName: \"kubernetes.io/projected/55618ab7-858f-49e2-b3ff-259cf7eb69ed-kube-api-access-m9sdk\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.981197 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2d11-account-create-update-7mbnk" event={"ID":"bca69608-e449-4f32-b236-6a59faa37c3f","Type":"ContainerDied","Data":"86be53176435eafb8b23acd5d54562f50faee2ad5ca273dd6394a1efb7c20178"} Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.981281 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86be53176435eafb8b23acd5d54562f50faee2ad5ca273dd6394a1efb7c20178" Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.981218 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2d11-account-create-update-7mbnk" Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.982712 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58d6-account-create-update-hg569" event={"ID":"55618ab7-858f-49e2-b3ff-259cf7eb69ed","Type":"ContainerDied","Data":"e30a9efea3247f049ab6a923bc282961a96123c2bc11c87d69dc52da1dae37b6"} Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.982741 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e30a9efea3247f049ab6a923bc282961a96123c2bc11c87d69dc52da1dae37b6" Nov 29 07:22:18 crc kubenswrapper[4828]: I1129 07:22:18.982769 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-58d6-account-create-update-hg569" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.257795 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.269994 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed93966d-a9d0-456c-b459-f06703deef71-etc-swift\") pod \"swift-storage-0\" (UID: \"ed93966d-a9d0-456c-b459-f06703deef71\") " pod="openstack/swift-storage-0" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.399072 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.655492 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-wc5ng"] Nov 29 07:22:19 crc kubenswrapper[4828]: E1129 07:22:19.656186 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f93d5f3-01f0-4035-8d53-22594f87c388" containerName="mariadb-database-create" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.656202 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f93d5f3-01f0-4035-8d53-22594f87c388" containerName="mariadb-database-create" Nov 29 07:22:19 crc kubenswrapper[4828]: E1129 07:22:19.656215 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d92091c-a581-48d8-8e33-8f54e57a03a3" containerName="mariadb-database-create" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.656221 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d92091c-a581-48d8-8e33-8f54e57a03a3" containerName="mariadb-database-create" Nov 29 07:22:19 crc kubenswrapper[4828]: E1129 07:22:19.656230 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946d34b3-2986-4833-bd08-b898ddd4fcd7" containerName="mariadb-database-create" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.656237 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="946d34b3-2986-4833-bd08-b898ddd4fcd7" containerName="mariadb-database-create" Nov 29 07:22:19 crc kubenswrapper[4828]: E1129 07:22:19.656246 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8" containerName="mariadb-account-create-update" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.656252 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8" containerName="mariadb-account-create-update" Nov 29 07:22:19 crc kubenswrapper[4828]: E1129 07:22:19.656465 4828 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="55618ab7-858f-49e2-b3ff-259cf7eb69ed" containerName="mariadb-account-create-update" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.656474 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="55618ab7-858f-49e2-b3ff-259cf7eb69ed" containerName="mariadb-account-create-update" Nov 29 07:22:19 crc kubenswrapper[4828]: E1129 07:22:19.656483 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca69608-e449-4f32-b236-6a59faa37c3f" containerName="mariadb-account-create-update" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.656489 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca69608-e449-4f32-b236-6a59faa37c3f" containerName="mariadb-account-create-update" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.656682 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="946d34b3-2986-4833-bd08-b898ddd4fcd7" containerName="mariadb-database-create" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.656698 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="55618ab7-858f-49e2-b3ff-259cf7eb69ed" containerName="mariadb-account-create-update" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.656709 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d92091c-a581-48d8-8e33-8f54e57a03a3" containerName="mariadb-database-create" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.656718 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f93d5f3-01f0-4035-8d53-22594f87c388" containerName="mariadb-database-create" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.656727 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8" containerName="mariadb-account-create-update" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.656737 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca69608-e449-4f32-b236-6a59faa37c3f" 
containerName="mariadb-account-create-update" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.657288 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wc5ng" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.659156 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.659212 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ghtfr" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.674186 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wc5ng"] Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.765399 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgdpb\" (UniqueName: \"kubernetes.io/projected/5e2b60cb-6670-4720-8aaf-3db7307905b0-kube-api-access-jgdpb\") pod \"glance-db-sync-wc5ng\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " pod="openstack/glance-db-sync-wc5ng" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.765532 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-db-sync-config-data\") pod \"glance-db-sync-wc5ng\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " pod="openstack/glance-db-sync-wc5ng" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.765563 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-combined-ca-bundle\") pod \"glance-db-sync-wc5ng\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " pod="openstack/glance-db-sync-wc5ng" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.765765 4828 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-config-data\") pod \"glance-db-sync-wc5ng\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " pod="openstack/glance-db-sync-wc5ng" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.867872 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-db-sync-config-data\") pod \"glance-db-sync-wc5ng\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " pod="openstack/glance-db-sync-wc5ng" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.867924 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-combined-ca-bundle\") pod \"glance-db-sync-wc5ng\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " pod="openstack/glance-db-sync-wc5ng" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.867980 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-config-data\") pod \"glance-db-sync-wc5ng\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " pod="openstack/glance-db-sync-wc5ng" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.868050 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgdpb\" (UniqueName: \"kubernetes.io/projected/5e2b60cb-6670-4720-8aaf-3db7307905b0-kube-api-access-jgdpb\") pod \"glance-db-sync-wc5ng\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " pod="openstack/glance-db-sync-wc5ng" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.873635 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-db-sync-config-data\") pod \"glance-db-sync-wc5ng\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " pod="openstack/glance-db-sync-wc5ng" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.873638 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-config-data\") pod \"glance-db-sync-wc5ng\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " pod="openstack/glance-db-sync-wc5ng" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.874278 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-combined-ca-bundle\") pod \"glance-db-sync-wc5ng\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " pod="openstack/glance-db-sync-wc5ng" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.891154 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgdpb\" (UniqueName: \"kubernetes.io/projected/5e2b60cb-6670-4720-8aaf-3db7307905b0-kube-api-access-jgdpb\") pod \"glance-db-sync-wc5ng\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " pod="openstack/glance-db-sync-wc5ng" Nov 29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.965910 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 29 07:22:19 crc kubenswrapper[4828]: W1129 07:22:19.966724 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded93966d_a9d0_456c_b459_f06703deef71.slice/crio-1fa5d7f16816ebc08b07a5ce3b75de45a6e97eba4a75caf34e83e74fc8edeb26 WatchSource:0}: Error finding container 1fa5d7f16816ebc08b07a5ce3b75de45a6e97eba4a75caf34e83e74fc8edeb26: Status 404 returned error can't find the container with id 1fa5d7f16816ebc08b07a5ce3b75de45a6e97eba4a75caf34e83e74fc8edeb26 Nov 
29 07:22:19 crc kubenswrapper[4828]: I1129 07:22:19.977591 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wc5ng" Nov 29 07:22:20 crc kubenswrapper[4828]: I1129 07:22:20.003457 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"1fa5d7f16816ebc08b07a5ce3b75de45a6e97eba4a75caf34e83e74fc8edeb26"} Nov 29 07:22:20 crc kubenswrapper[4828]: I1129 07:22:20.258363 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wc5ng"] Nov 29 07:22:20 crc kubenswrapper[4828]: W1129 07:22:20.266156 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e2b60cb_6670_4720_8aaf_3db7307905b0.slice/crio-bd900b2ecebd9b8c3ab5b26529f77965c75af000eac0e956084f89bcf82fe67c WatchSource:0}: Error finding container bd900b2ecebd9b8c3ab5b26529f77965c75af000eac0e956084f89bcf82fe67c: Status 404 returned error can't find the container with id bd900b2ecebd9b8c3ab5b26529f77965c75af000eac0e956084f89bcf82fe67c Nov 29 07:22:20 crc kubenswrapper[4828]: I1129 07:22:20.613192 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:22:21 crc kubenswrapper[4828]: I1129 07:22:21.021777 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wc5ng" event={"ID":"5e2b60cb-6670-4720-8aaf-3db7307905b0","Type":"ContainerStarted","Data":"bd900b2ecebd9b8c3ab5b26529f77965c75af000eac0e956084f89bcf82fe67c"} Nov 29 07:22:24 crc kubenswrapper[4828]: I1129 07:22:24.045132 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"88e0762e2cdf46a3ad17b4353ee130621e4b1c2ab6b91ce0a5cc02bdaf15c08b"} Nov 29 07:22:24 crc kubenswrapper[4828]: 
I1129 07:22:24.045694 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"7d965ffcf5c1e6fe77c31409756aef65199877e011f4820098abb8fe42910f85"} Nov 29 07:22:25 crc kubenswrapper[4828]: I1129 07:22:25.056555 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"8cadfc0c658abae90eedc528084d8b0636f3e0f398087e4e1869668b87ce55f9"} Nov 29 07:22:25 crc kubenswrapper[4828]: I1129 07:22:25.056829 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"bc1573e789ee31f8cf161ef1649768360a2d6cd9f7cdc52ba6b3f7161cc0f197"} Nov 29 07:22:30 crc kubenswrapper[4828]: I1129 07:22:30.594491 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 29 07:22:30 crc kubenswrapper[4828]: I1129 07:22:30.935194 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-gn86f"] Nov 29 07:22:30 crc kubenswrapper[4828]: I1129 07:22:30.936669 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-gn86f" Nov 29 07:22:30 crc kubenswrapper[4828]: I1129 07:22:30.951328 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-gn86f"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.000402 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/178b5736-03a6-439e-b1b8-b123b85d1876-operator-scripts\") pod \"heat-db-create-gn86f\" (UID: \"178b5736-03a6-439e-b1b8-b123b85d1876\") " pod="openstack/heat-db-create-gn86f" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.000646 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx7wm\" (UniqueName: \"kubernetes.io/projected/178b5736-03a6-439e-b1b8-b123b85d1876-kube-api-access-sx7wm\") pod \"heat-db-create-gn86f\" (UID: \"178b5736-03a6-439e-b1b8-b123b85d1876\") " pod="openstack/heat-db-create-gn86f" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.102488 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/178b5736-03a6-439e-b1b8-b123b85d1876-operator-scripts\") pod \"heat-db-create-gn86f\" (UID: \"178b5736-03a6-439e-b1b8-b123b85d1876\") " pod="openstack/heat-db-create-gn86f" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.102565 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx7wm\" (UniqueName: \"kubernetes.io/projected/178b5736-03a6-439e-b1b8-b123b85d1876-kube-api-access-sx7wm\") pod \"heat-db-create-gn86f\" (UID: \"178b5736-03a6-439e-b1b8-b123b85d1876\") " pod="openstack/heat-db-create-gn86f" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.102504 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-nbs6p"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 
07:22:31.103158 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/178b5736-03a6-439e-b1b8-b123b85d1876-operator-scripts\") pod \"heat-db-create-gn86f\" (UID: \"178b5736-03a6-439e-b1b8-b123b85d1876\") " pod="openstack/heat-db-create-gn86f" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.104128 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-nbs6p" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.129126 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-216d-account-create-update-znwgr"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.130391 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-216d-account-create-update-znwgr" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.134742 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.137171 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx7wm\" (UniqueName: \"kubernetes.io/projected/178b5736-03a6-439e-b1b8-b123b85d1876-kube-api-access-sx7wm\") pod \"heat-db-create-gn86f\" (UID: \"178b5736-03a6-439e-b1b8-b123b85d1876\") " pod="openstack/heat-db-create-gn86f" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.143066 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-nbs6p"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.149208 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-216d-account-create-update-znwgr"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.207885 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4695\" (UniqueName: 
\"kubernetes.io/projected/2cdeb5e1-cc93-4735-9968-0643cf836b22-kube-api-access-b4695\") pod \"barbican-db-create-nbs6p\" (UID: \"2cdeb5e1-cc93-4735-9968-0643cf836b22\") " pod="openstack/barbican-db-create-nbs6p" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.207997 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzjf6\" (UniqueName: \"kubernetes.io/projected/4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8-kube-api-access-lzjf6\") pod \"heat-216d-account-create-update-znwgr\" (UID: \"4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8\") " pod="openstack/heat-216d-account-create-update-znwgr" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.208100 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cdeb5e1-cc93-4735-9968-0643cf836b22-operator-scripts\") pod \"barbican-db-create-nbs6p\" (UID: \"2cdeb5e1-cc93-4735-9968-0643cf836b22\") " pod="openstack/barbican-db-create-nbs6p" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.208210 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8-operator-scripts\") pod \"heat-216d-account-create-update-znwgr\" (UID: \"4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8\") " pod="openstack/heat-216d-account-create-update-znwgr" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.250345 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zjlgk"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.251739 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zjlgk" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.265167 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-gn86f" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.279228 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zjlgk"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.320571 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc59d5d0-a534-49b4-977f-c0c787929ad7-operator-scripts\") pod \"cinder-db-create-zjlgk\" (UID: \"fc59d5d0-a534-49b4-977f-c0c787929ad7\") " pod="openstack/cinder-db-create-zjlgk" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.320638 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8-operator-scripts\") pod \"heat-216d-account-create-update-znwgr\" (UID: \"4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8\") " pod="openstack/heat-216d-account-create-update-znwgr" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.320697 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4695\" (UniqueName: \"kubernetes.io/projected/2cdeb5e1-cc93-4735-9968-0643cf836b22-kube-api-access-b4695\") pod \"barbican-db-create-nbs6p\" (UID: \"2cdeb5e1-cc93-4735-9968-0643cf836b22\") " pod="openstack/barbican-db-create-nbs6p" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.320754 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzjf6\" (UniqueName: \"kubernetes.io/projected/4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8-kube-api-access-lzjf6\") pod \"heat-216d-account-create-update-znwgr\" (UID: \"4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8\") " pod="openstack/heat-216d-account-create-update-znwgr" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.320816 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-nkhpl\" (UniqueName: \"kubernetes.io/projected/fc59d5d0-a534-49b4-977f-c0c787929ad7-kube-api-access-nkhpl\") pod \"cinder-db-create-zjlgk\" (UID: \"fc59d5d0-a534-49b4-977f-c0c787929ad7\") " pod="openstack/cinder-db-create-zjlgk" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.320843 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cdeb5e1-cc93-4735-9968-0643cf836b22-operator-scripts\") pod \"barbican-db-create-nbs6p\" (UID: \"2cdeb5e1-cc93-4735-9968-0643cf836b22\") " pod="openstack/barbican-db-create-nbs6p" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.321830 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8-operator-scripts\") pod \"heat-216d-account-create-update-znwgr\" (UID: \"4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8\") " pod="openstack/heat-216d-account-create-update-znwgr" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.321893 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cdeb5e1-cc93-4735-9968-0643cf836b22-operator-scripts\") pod \"barbican-db-create-nbs6p\" (UID: \"2cdeb5e1-cc93-4735-9968-0643cf836b22\") " pod="openstack/barbican-db-create-nbs6p" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.326858 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-9lfbf"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.328315 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9lfbf" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.333389 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2566-account-create-update-m95nq"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.334706 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.334766 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.337870 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.338587 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5wkrh" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.342881 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2566-account-create-update-m95nq" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.353154 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9lfbf"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.361069 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.365513 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2566-account-create-update-m95nq"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.367194 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4695\" (UniqueName: \"kubernetes.io/projected/2cdeb5e1-cc93-4735-9968-0643cf836b22-kube-api-access-b4695\") pod \"barbican-db-create-nbs6p\" (UID: \"2cdeb5e1-cc93-4735-9968-0643cf836b22\") " pod="openstack/barbican-db-create-nbs6p" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.368847 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzjf6\" (UniqueName: \"kubernetes.io/projected/4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8-kube-api-access-lzjf6\") pod \"heat-216d-account-create-update-znwgr\" (UID: \"4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8\") " pod="openstack/heat-216d-account-create-update-znwgr" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.447290 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-nbs6p" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.460674 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc59d5d0-a534-49b4-977f-c0c787929ad7-operator-scripts\") pod \"cinder-db-create-zjlgk\" (UID: \"fc59d5d0-a534-49b4-977f-c0c787929ad7\") " pod="openstack/cinder-db-create-zjlgk" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.460747 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea442090-ae24-451d-ba14-2d18dbb4076a-combined-ca-bundle\") pod \"keystone-db-sync-9lfbf\" (UID: \"ea442090-ae24-451d-ba14-2d18dbb4076a\") " pod="openstack/keystone-db-sync-9lfbf" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.460805 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkwg4\" (UniqueName: \"kubernetes.io/projected/26e2b4f0-bbde-48b4-9c44-12e59b1548b9-kube-api-access-lkwg4\") pod \"barbican-2566-account-create-update-m95nq\" (UID: \"26e2b4f0-bbde-48b4-9c44-12e59b1548b9\") " pod="openstack/barbican-2566-account-create-update-m95nq" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.460864 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea442090-ae24-451d-ba14-2d18dbb4076a-config-data\") pod \"keystone-db-sync-9lfbf\" (UID: \"ea442090-ae24-451d-ba14-2d18dbb4076a\") " pod="openstack/keystone-db-sync-9lfbf" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.460917 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkhpl\" (UniqueName: \"kubernetes.io/projected/fc59d5d0-a534-49b4-977f-c0c787929ad7-kube-api-access-nkhpl\") pod \"cinder-db-create-zjlgk\" (UID: 
\"fc59d5d0-a534-49b4-977f-c0c787929ad7\") " pod="openstack/cinder-db-create-zjlgk" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.460961 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e2b4f0-bbde-48b4-9c44-12e59b1548b9-operator-scripts\") pod \"barbican-2566-account-create-update-m95nq\" (UID: \"26e2b4f0-bbde-48b4-9c44-12e59b1548b9\") " pod="openstack/barbican-2566-account-create-update-m95nq" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.460994 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhsps\" (UniqueName: \"kubernetes.io/projected/ea442090-ae24-451d-ba14-2d18dbb4076a-kube-api-access-vhsps\") pod \"keystone-db-sync-9lfbf\" (UID: \"ea442090-ae24-451d-ba14-2d18dbb4076a\") " pod="openstack/keystone-db-sync-9lfbf" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.462130 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc59d5d0-a534-49b4-977f-c0c787929ad7-operator-scripts\") pod \"cinder-db-create-zjlgk\" (UID: \"fc59d5d0-a534-49b4-977f-c0c787929ad7\") " pod="openstack/cinder-db-create-zjlgk" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.494167 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkhpl\" (UniqueName: \"kubernetes.io/projected/fc59d5d0-a534-49b4-977f-c0c787929ad7-kube-api-access-nkhpl\") pod \"cinder-db-create-zjlgk\" (UID: \"fc59d5d0-a534-49b4-977f-c0c787929ad7\") " pod="openstack/cinder-db-create-zjlgk" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.505036 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-216d-account-create-update-znwgr" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.559603 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6816-account-create-update-m6qkv"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.561036 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6816-account-create-update-m6qkv" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.563568 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea442090-ae24-451d-ba14-2d18dbb4076a-config-data\") pod \"keystone-db-sync-9lfbf\" (UID: \"ea442090-ae24-451d-ba14-2d18dbb4076a\") " pod="openstack/keystone-db-sync-9lfbf" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.564107 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e2b4f0-bbde-48b4-9c44-12e59b1548b9-operator-scripts\") pod \"barbican-2566-account-create-update-m95nq\" (UID: \"26e2b4f0-bbde-48b4-9c44-12e59b1548b9\") " pod="openstack/barbican-2566-account-create-update-m95nq" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.564523 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhsps\" (UniqueName: \"kubernetes.io/projected/ea442090-ae24-451d-ba14-2d18dbb4076a-kube-api-access-vhsps\") pod \"keystone-db-sync-9lfbf\" (UID: \"ea442090-ae24-451d-ba14-2d18dbb4076a\") " pod="openstack/keystone-db-sync-9lfbf" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.564986 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea442090-ae24-451d-ba14-2d18dbb4076a-combined-ca-bundle\") pod \"keystone-db-sync-9lfbf\" (UID: \"ea442090-ae24-451d-ba14-2d18dbb4076a\") " 
pod="openstack/keystone-db-sync-9lfbf" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.565595 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkwg4\" (UniqueName: \"kubernetes.io/projected/26e2b4f0-bbde-48b4-9c44-12e59b1548b9-kube-api-access-lkwg4\") pod \"barbican-2566-account-create-update-m95nq\" (UID: \"26e2b4f0-bbde-48b4-9c44-12e59b1548b9\") " pod="openstack/barbican-2566-account-create-update-m95nq" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.567465 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e2b4f0-bbde-48b4-9c44-12e59b1548b9-operator-scripts\") pod \"barbican-2566-account-create-update-m95nq\" (UID: \"26e2b4f0-bbde-48b4-9c44-12e59b1548b9\") " pod="openstack/barbican-2566-account-create-update-m95nq" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.569097 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6816-account-create-update-m6qkv"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.572249 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.575183 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea442090-ae24-451d-ba14-2d18dbb4076a-config-data\") pod \"keystone-db-sync-9lfbf\" (UID: \"ea442090-ae24-451d-ba14-2d18dbb4076a\") " pod="openstack/keystone-db-sync-9lfbf" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.587720 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zjlgk" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.595010 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea442090-ae24-451d-ba14-2d18dbb4076a-combined-ca-bundle\") pod \"keystone-db-sync-9lfbf\" (UID: \"ea442090-ae24-451d-ba14-2d18dbb4076a\") " pod="openstack/keystone-db-sync-9lfbf" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.596174 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkwg4\" (UniqueName: \"kubernetes.io/projected/26e2b4f0-bbde-48b4-9c44-12e59b1548b9-kube-api-access-lkwg4\") pod \"barbican-2566-account-create-update-m95nq\" (UID: \"26e2b4f0-bbde-48b4-9c44-12e59b1548b9\") " pod="openstack/barbican-2566-account-create-update-m95nq" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.616015 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhsps\" (UniqueName: \"kubernetes.io/projected/ea442090-ae24-451d-ba14-2d18dbb4076a-kube-api-access-vhsps\") pod \"keystone-db-sync-9lfbf\" (UID: \"ea442090-ae24-451d-ba14-2d18dbb4076a\") " pod="openstack/keystone-db-sync-9lfbf" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.624884 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-bd2rb"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.626258 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bd2rb" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.631443 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bd2rb"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.662486 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9lfbf" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.666752 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbt58\" (UniqueName: \"kubernetes.io/projected/ee34a7f9-16ab-4a44-855c-ed865e5d0331-kube-api-access-bbt58\") pod \"cinder-6816-account-create-update-m6qkv\" (UID: \"ee34a7f9-16ab-4a44-855c-ed865e5d0331\") " pod="openstack/cinder-6816-account-create-update-m6qkv" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.666872 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee34a7f9-16ab-4a44-855c-ed865e5d0331-operator-scripts\") pod \"cinder-6816-account-create-update-m6qkv\" (UID: \"ee34a7f9-16ab-4a44-855c-ed865e5d0331\") " pod="openstack/cinder-6816-account-create-update-m6qkv" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.768249 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbt58\" (UniqueName: \"kubernetes.io/projected/ee34a7f9-16ab-4a44-855c-ed865e5d0331-kube-api-access-bbt58\") pod \"cinder-6816-account-create-update-m6qkv\" (UID: \"ee34a7f9-16ab-4a44-855c-ed865e5d0331\") " pod="openstack/cinder-6816-account-create-update-m6qkv" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.768326 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b394d40e-1759-4220-a59f-9d5d90957634-operator-scripts\") pod \"neutron-db-create-bd2rb\" (UID: \"b394d40e-1759-4220-a59f-9d5d90957634\") " pod="openstack/neutron-db-create-bd2rb" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.768350 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2clcq\" (UniqueName: 
\"kubernetes.io/projected/b394d40e-1759-4220-a59f-9d5d90957634-kube-api-access-2clcq\") pod \"neutron-db-create-bd2rb\" (UID: \"b394d40e-1759-4220-a59f-9d5d90957634\") " pod="openstack/neutron-db-create-bd2rb" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.768443 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee34a7f9-16ab-4a44-855c-ed865e5d0331-operator-scripts\") pod \"cinder-6816-account-create-update-m6qkv\" (UID: \"ee34a7f9-16ab-4a44-855c-ed865e5d0331\") " pod="openstack/cinder-6816-account-create-update-m6qkv" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.769158 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee34a7f9-16ab-4a44-855c-ed865e5d0331-operator-scripts\") pod \"cinder-6816-account-create-update-m6qkv\" (UID: \"ee34a7f9-16ab-4a44-855c-ed865e5d0331\") " pod="openstack/cinder-6816-account-create-update-m6qkv" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.791310 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2566-account-create-update-m95nq" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.791663 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbt58\" (UniqueName: \"kubernetes.io/projected/ee34a7f9-16ab-4a44-855c-ed865e5d0331-kube-api-access-bbt58\") pod \"cinder-6816-account-create-update-m6qkv\" (UID: \"ee34a7f9-16ab-4a44-855c-ed865e5d0331\") " pod="openstack/cinder-6816-account-create-update-m6qkv" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.851140 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-1206-account-create-update-gbdkb"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.852135 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1206-account-create-update-gbdkb" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.859852 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.863899 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1206-account-create-update-gbdkb"] Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.885424 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b394d40e-1759-4220-a59f-9d5d90957634-operator-scripts\") pod \"neutron-db-create-bd2rb\" (UID: \"b394d40e-1759-4220-a59f-9d5d90957634\") " pod="openstack/neutron-db-create-bd2rb" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.885477 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2clcq\" (UniqueName: \"kubernetes.io/projected/b394d40e-1759-4220-a59f-9d5d90957634-kube-api-access-2clcq\") pod \"neutron-db-create-bd2rb\" (UID: \"b394d40e-1759-4220-a59f-9d5d90957634\") " pod="openstack/neutron-db-create-bd2rb" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.886239 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b394d40e-1759-4220-a59f-9d5d90957634-operator-scripts\") pod \"neutron-db-create-bd2rb\" (UID: \"b394d40e-1759-4220-a59f-9d5d90957634\") " pod="openstack/neutron-db-create-bd2rb" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.901584 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2clcq\" (UniqueName: \"kubernetes.io/projected/b394d40e-1759-4220-a59f-9d5d90957634-kube-api-access-2clcq\") pod \"neutron-db-create-bd2rb\" (UID: \"b394d40e-1759-4220-a59f-9d5d90957634\") " pod="openstack/neutron-db-create-bd2rb" Nov 29 07:22:31 crc kubenswrapper[4828]: 
I1129 07:22:31.929831 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6816-account-create-update-m6qkv" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.949437 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bd2rb" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.986942 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmbks\" (UniqueName: \"kubernetes.io/projected/927d823f-6545-47a6-b9d6-3437c4f3d493-kube-api-access-gmbks\") pod \"neutron-1206-account-create-update-gbdkb\" (UID: \"927d823f-6545-47a6-b9d6-3437c4f3d493\") " pod="openstack/neutron-1206-account-create-update-gbdkb" Nov 29 07:22:31 crc kubenswrapper[4828]: I1129 07:22:31.987025 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/927d823f-6545-47a6-b9d6-3437c4f3d493-operator-scripts\") pod \"neutron-1206-account-create-update-gbdkb\" (UID: \"927d823f-6545-47a6-b9d6-3437c4f3d493\") " pod="openstack/neutron-1206-account-create-update-gbdkb" Nov 29 07:22:32 crc kubenswrapper[4828]: I1129 07:22:32.089085 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmbks\" (UniqueName: \"kubernetes.io/projected/927d823f-6545-47a6-b9d6-3437c4f3d493-kube-api-access-gmbks\") pod \"neutron-1206-account-create-update-gbdkb\" (UID: \"927d823f-6545-47a6-b9d6-3437c4f3d493\") " pod="openstack/neutron-1206-account-create-update-gbdkb" Nov 29 07:22:32 crc kubenswrapper[4828]: I1129 07:22:32.089167 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/927d823f-6545-47a6-b9d6-3437c4f3d493-operator-scripts\") pod \"neutron-1206-account-create-update-gbdkb\" (UID: \"927d823f-6545-47a6-b9d6-3437c4f3d493\") 
" pod="openstack/neutron-1206-account-create-update-gbdkb" Nov 29 07:22:32 crc kubenswrapper[4828]: I1129 07:22:32.090066 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/927d823f-6545-47a6-b9d6-3437c4f3d493-operator-scripts\") pod \"neutron-1206-account-create-update-gbdkb\" (UID: \"927d823f-6545-47a6-b9d6-3437c4f3d493\") " pod="openstack/neutron-1206-account-create-update-gbdkb" Nov 29 07:22:32 crc kubenswrapper[4828]: I1129 07:22:32.106371 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmbks\" (UniqueName: \"kubernetes.io/projected/927d823f-6545-47a6-b9d6-3437c4f3d493-kube-api-access-gmbks\") pod \"neutron-1206-account-create-update-gbdkb\" (UID: \"927d823f-6545-47a6-b9d6-3437c4f3d493\") " pod="openstack/neutron-1206-account-create-update-gbdkb" Nov 29 07:22:32 crc kubenswrapper[4828]: I1129 07:22:32.170796 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1206-account-create-update-gbdkb" Nov 29 07:22:38 crc kubenswrapper[4828]: E1129 07:22:38.189293 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 29 07:22:38 crc kubenswrapper[4828]: E1129 07:22:38.190236 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgdpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-wc5ng_openstack(5e2b60cb-6670-4720-8aaf-3db7307905b0): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Nov 29 07:22:38 crc kubenswrapper[4828]: E1129 07:22:38.191524 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-wc5ng" podUID="5e2b60cb-6670-4720-8aaf-3db7307905b0" Nov 29 07:22:38 crc kubenswrapper[4828]: I1129 07:22:38.612439 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-nbs6p"] Nov 29 07:22:38 crc kubenswrapper[4828]: W1129 07:22:38.615232 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cdeb5e1_cc93_4735_9968_0643cf836b22.slice/crio-0f011e9dbfaa356f9da2f472d845516e90ad5c76de08ae0a188d5f94cdeac1b2 WatchSource:0}: Error finding container 0f011e9dbfaa356f9da2f472d845516e90ad5c76de08ae0a188d5f94cdeac1b2: Status 404 returned error can't find the container with id 0f011e9dbfaa356f9da2f472d845516e90ad5c76de08ae0a188d5f94cdeac1b2 Nov 29 07:22:38 crc kubenswrapper[4828]: I1129 07:22:38.831448 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zjlgk"] Nov 29 07:22:38 crc kubenswrapper[4828]: I1129 07:22:38.850213 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-gn86f"] Nov 29 07:22:38 crc kubenswrapper[4828]: I1129 07:22:38.861046 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9lfbf"] Nov 29 07:22:38 crc kubenswrapper[4828]: W1129 07:22:38.868371 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod178b5736_03a6_439e_b1b8_b123b85d1876.slice/crio-eee995d861b45f7150792bbc2f02f62f06a1497ebd1a09297b2cf1e158f11d86 WatchSource:0}: Error finding container eee995d861b45f7150792bbc2f02f62f06a1497ebd1a09297b2cf1e158f11d86: 
Status 404 returned error can't find the container with id eee995d861b45f7150792bbc2f02f62f06a1497ebd1a09297b2cf1e158f11d86 Nov 29 07:22:38 crc kubenswrapper[4828]: I1129 07:22:38.871921 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1206-account-create-update-gbdkb"] Nov 29 07:22:38 crc kubenswrapper[4828]: W1129 07:22:38.874955 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod927d823f_6545_47a6_b9d6_3437c4f3d493.slice/crio-24f3d967ded535268c7c3ac3b78a8ca49df9b3ab2e9bf21bfdce98080efe0598 WatchSource:0}: Error finding container 24f3d967ded535268c7c3ac3b78a8ca49df9b3ab2e9bf21bfdce98080efe0598: Status 404 returned error can't find the container with id 24f3d967ded535268c7c3ac3b78a8ca49df9b3ab2e9bf21bfdce98080efe0598 Nov 29 07:22:38 crc kubenswrapper[4828]: I1129 07:22:38.893840 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6816-account-create-update-m6qkv"] Nov 29 07:22:39 crc kubenswrapper[4828]: I1129 07:22:39.007926 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-216d-account-create-update-znwgr"] Nov 29 07:22:39 crc kubenswrapper[4828]: I1129 07:22:39.014907 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2566-account-create-update-m95nq"] Nov 29 07:22:39 crc kubenswrapper[4828]: I1129 07:22:39.027639 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bd2rb"] Nov 29 07:22:39 crc kubenswrapper[4828]: I1129 07:22:39.182495 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zjlgk" event={"ID":"fc59d5d0-a534-49b4-977f-c0c787929ad7","Type":"ContainerStarted","Data":"3be6f09862d654d9f66a07ec86788b24fa8e8595fa0a97c4e029d20bc04ef090"} Nov 29 07:22:39 crc kubenswrapper[4828]: I1129 07:22:39.182835 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zjlgk" 
event={"ID":"fc59d5d0-a534-49b4-977f-c0c787929ad7","Type":"ContainerStarted","Data":"043dbde4f2f18573a6ddf35935c1925ca6071e78c0309b174c9925aab4d14724"} Nov 29 07:22:39 crc kubenswrapper[4828]: I1129 07:22:39.184746 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lfbf" event={"ID":"ea442090-ae24-451d-ba14-2d18dbb4076a","Type":"ContainerStarted","Data":"453dc845c8fc6b765b25667dea729de2d290df34b1f05e23ed5616218ccca600"} Nov 29 07:22:39 crc kubenswrapper[4828]: I1129 07:22:39.187574 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6816-account-create-update-m6qkv" event={"ID":"ee34a7f9-16ab-4a44-855c-ed865e5d0331","Type":"ContainerStarted","Data":"f1c7b04552445915b4d0f166325def5928554df05ef07eff6a0f952ef99c16ae"} Nov 29 07:22:39 crc kubenswrapper[4828]: I1129 07:22:39.191091 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-gn86f" event={"ID":"178b5736-03a6-439e-b1b8-b123b85d1876","Type":"ContainerStarted","Data":"ffca92603e4c81546577b9e30b42b6d2d24698cd204c6dab8888909bc818a053"} Nov 29 07:22:39 crc kubenswrapper[4828]: I1129 07:22:39.191161 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-gn86f" event={"ID":"178b5736-03a6-439e-b1b8-b123b85d1876","Type":"ContainerStarted","Data":"eee995d861b45f7150792bbc2f02f62f06a1497ebd1a09297b2cf1e158f11d86"} Nov 29 07:22:39 crc kubenswrapper[4828]: I1129 07:22:39.193331 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1206-account-create-update-gbdkb" event={"ID":"927d823f-6545-47a6-b9d6-3437c4f3d493","Type":"ContainerStarted","Data":"24f3d967ded535268c7c3ac3b78a8ca49df9b3ab2e9bf21bfdce98080efe0598"} Nov 29 07:22:39 crc kubenswrapper[4828]: I1129 07:22:39.196036 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nbs6p" 
event={"ID":"2cdeb5e1-cc93-4735-9968-0643cf836b22","Type":"ContainerStarted","Data":"b8081ca42dc7802062485c3fb6364babee80b13b61ece396234ecb3eea7d3a09"} Nov 29 07:22:39 crc kubenswrapper[4828]: I1129 07:22:39.196117 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nbs6p" event={"ID":"2cdeb5e1-cc93-4735-9968-0643cf836b22","Type":"ContainerStarted","Data":"0f011e9dbfaa356f9da2f472d845516e90ad5c76de08ae0a188d5f94cdeac1b2"} Nov 29 07:22:39 crc kubenswrapper[4828]: E1129 07:22:39.199866 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-wc5ng" podUID="5e2b60cb-6670-4720-8aaf-3db7307905b0" Nov 29 07:22:39 crc kubenswrapper[4828]: I1129 07:22:39.209660 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-zjlgk" podStartSLOduration=8.209632698 podStartE2EDuration="8.209632698s" podCreationTimestamp="2025-11-29 07:22:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:22:39.208091789 +0000 UTC m=+1298.830167847" watchObservedRunningTime="2025-11-29 07:22:39.209632698 +0000 UTC m=+1298.831708746" Nov 29 07:22:39 crc kubenswrapper[4828]: I1129 07:22:39.245618 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-gn86f" podStartSLOduration=9.245586713 podStartE2EDuration="9.245586713s" podCreationTimestamp="2025-11-29 07:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:22:39.239021807 +0000 UTC m=+1298.861097875" watchObservedRunningTime="2025-11-29 07:22:39.245586713 +0000 UTC m=+1298.867662771" Nov 29 07:22:40 crc 
kubenswrapper[4828]: I1129 07:22:40.214345 4828 generic.go:334] "Generic (PLEG): container finished" podID="2cdeb5e1-cc93-4735-9968-0643cf836b22" containerID="b8081ca42dc7802062485c3fb6364babee80b13b61ece396234ecb3eea7d3a09" exitCode=0 Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.214449 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nbs6p" event={"ID":"2cdeb5e1-cc93-4735-9968-0643cf836b22","Type":"ContainerDied","Data":"b8081ca42dc7802062485c3fb6364babee80b13b61ece396234ecb3eea7d3a09"} Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.227730 4828 generic.go:334] "Generic (PLEG): container finished" podID="4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8" containerID="d44685b1055ff18bc37ac5c248c7da04f81535f9a2e58364279f9867de055285" exitCode=0 Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.228176 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-216d-account-create-update-znwgr" event={"ID":"4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8","Type":"ContainerDied","Data":"d44685b1055ff18bc37ac5c248c7da04f81535f9a2e58364279f9867de055285"} Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.228240 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-216d-account-create-update-znwgr" event={"ID":"4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8","Type":"ContainerStarted","Data":"83ed4d879b39f01bb64eec5ffab4e41bab3d94ae5a02b4316e99bc87440804d2"} Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.232227 4828 generic.go:334] "Generic (PLEG): container finished" podID="fc59d5d0-a534-49b4-977f-c0c787929ad7" containerID="3be6f09862d654d9f66a07ec86788b24fa8e8595fa0a97c4e029d20bc04ef090" exitCode=0 Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.232334 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zjlgk" 
event={"ID":"fc59d5d0-a534-49b4-977f-c0c787929ad7","Type":"ContainerDied","Data":"3be6f09862d654d9f66a07ec86788b24fa8e8595fa0a97c4e029d20bc04ef090"} Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.234009 4828 generic.go:334] "Generic (PLEG): container finished" podID="b394d40e-1759-4220-a59f-9d5d90957634" containerID="39ba9bd8dce5a86dfb422cdb1d4aecad5a12bd916cf6f4fb5a469a739c2cad21" exitCode=0 Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.234058 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bd2rb" event={"ID":"b394d40e-1759-4220-a59f-9d5d90957634","Type":"ContainerDied","Data":"39ba9bd8dce5a86dfb422cdb1d4aecad5a12bd916cf6f4fb5a469a739c2cad21"} Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.234114 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bd2rb" event={"ID":"b394d40e-1759-4220-a59f-9d5d90957634","Type":"ContainerStarted","Data":"7dffb6aeaedd95f423cdd6e2a588d016c6afe809c66cc6386e796e043304bb99"} Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.235527 4828 generic.go:334] "Generic (PLEG): container finished" podID="178b5736-03a6-439e-b1b8-b123b85d1876" containerID="ffca92603e4c81546577b9e30b42b6d2d24698cd204c6dab8888909bc818a053" exitCode=0 Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.235613 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-gn86f" event={"ID":"178b5736-03a6-439e-b1b8-b123b85d1876","Type":"ContainerDied","Data":"ffca92603e4c81546577b9e30b42b6d2d24698cd204c6dab8888909bc818a053"} Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.238704 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"726ca70f986c53033a6edb1142fc12f02c3ea70b60702f373fe8d310a17b1a61"} Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.238869 4828 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"0c14a2fa4f07e72051411a428f2631170b1df2d0a24a7f1eedd05a61f7b8489e"} Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.240238 4828 generic.go:334] "Generic (PLEG): container finished" podID="927d823f-6545-47a6-b9d6-3437c4f3d493" containerID="9a12d874a7daefe8e253d9173e720323ab02536fdb775d142644267c688c0494" exitCode=0 Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.240354 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1206-account-create-update-gbdkb" event={"ID":"927d823f-6545-47a6-b9d6-3437c4f3d493","Type":"ContainerDied","Data":"9a12d874a7daefe8e253d9173e720323ab02536fdb775d142644267c688c0494"} Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.242158 4828 generic.go:334] "Generic (PLEG): container finished" podID="ee34a7f9-16ab-4a44-855c-ed865e5d0331" containerID="e1dbf7eec1e0bad3cf1456fdee03d377086998ad799a0bedea4951e9feb62407" exitCode=0 Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.242227 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6816-account-create-update-m6qkv" event={"ID":"ee34a7f9-16ab-4a44-855c-ed865e5d0331","Type":"ContainerDied","Data":"e1dbf7eec1e0bad3cf1456fdee03d377086998ad799a0bedea4951e9feb62407"} Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.244053 4828 generic.go:334] "Generic (PLEG): container finished" podID="26e2b4f0-bbde-48b4-9c44-12e59b1548b9" containerID="fde4972dea806434f8795fd8ee837363678bb829dd54701ae495292f27e14ca9" exitCode=0 Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.244090 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2566-account-create-update-m95nq" event={"ID":"26e2b4f0-bbde-48b4-9c44-12e59b1548b9","Type":"ContainerDied","Data":"fde4972dea806434f8795fd8ee837363678bb829dd54701ae495292f27e14ca9"} Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.244120 4828 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2566-account-create-update-m95nq" event={"ID":"26e2b4f0-bbde-48b4-9c44-12e59b1548b9","Type":"ContainerStarted","Data":"c3ce30b05f04677bab4de7a7d7a862e14da78bb44f099b3a9867ba673ad74e46"} Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.606849 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-nbs6p" Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.641295 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4695\" (UniqueName: \"kubernetes.io/projected/2cdeb5e1-cc93-4735-9968-0643cf836b22-kube-api-access-b4695\") pod \"2cdeb5e1-cc93-4735-9968-0643cf836b22\" (UID: \"2cdeb5e1-cc93-4735-9968-0643cf836b22\") " Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.641524 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cdeb5e1-cc93-4735-9968-0643cf836b22-operator-scripts\") pod \"2cdeb5e1-cc93-4735-9968-0643cf836b22\" (UID: \"2cdeb5e1-cc93-4735-9968-0643cf836b22\") " Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.642461 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cdeb5e1-cc93-4735-9968-0643cf836b22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cdeb5e1-cc93-4735-9968-0643cf836b22" (UID: "2cdeb5e1-cc93-4735-9968-0643cf836b22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.648147 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cdeb5e1-cc93-4735-9968-0643cf836b22-kube-api-access-b4695" (OuterVolumeSpecName: "kube-api-access-b4695") pod "2cdeb5e1-cc93-4735-9968-0643cf836b22" (UID: "2cdeb5e1-cc93-4735-9968-0643cf836b22"). 
InnerVolumeSpecName "kube-api-access-b4695". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.743537 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4695\" (UniqueName: \"kubernetes.io/projected/2cdeb5e1-cc93-4735-9968-0643cf836b22-kube-api-access-b4695\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:40 crc kubenswrapper[4828]: I1129 07:22:40.743578 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cdeb5e1-cc93-4735-9968-0643cf836b22-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.257016 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-nbs6p" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.257014 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nbs6p" event={"ID":"2cdeb5e1-cc93-4735-9968-0643cf836b22","Type":"ContainerDied","Data":"0f011e9dbfaa356f9da2f472d845516e90ad5c76de08ae0a188d5f94cdeac1b2"} Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.257497 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f011e9dbfaa356f9da2f472d845516e90ad5c76de08ae0a188d5f94cdeac1b2" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.262455 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"8679dc909eda4be8b5511743e817d4c334dc0a2608ad1de89925da4e3b74d376"} Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.262495 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"34c89b31bc1516c75caac4840a3a0314a788b5a6c1d6119156e081fba3888b3a"} Nov 29 07:22:41 crc 
kubenswrapper[4828]: I1129 07:22:41.644225 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2566-account-create-update-m95nq" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.658029 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e2b4f0-bbde-48b4-9c44-12e59b1548b9-operator-scripts\") pod \"26e2b4f0-bbde-48b4-9c44-12e59b1548b9\" (UID: \"26e2b4f0-bbde-48b4-9c44-12e59b1548b9\") " Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.658135 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkwg4\" (UniqueName: \"kubernetes.io/projected/26e2b4f0-bbde-48b4-9c44-12e59b1548b9-kube-api-access-lkwg4\") pod \"26e2b4f0-bbde-48b4-9c44-12e59b1548b9\" (UID: \"26e2b4f0-bbde-48b4-9c44-12e59b1548b9\") " Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.658807 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e2b4f0-bbde-48b4-9c44-12e59b1548b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26e2b4f0-bbde-48b4-9c44-12e59b1548b9" (UID: "26e2b4f0-bbde-48b4-9c44-12e59b1548b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.672663 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e2b4f0-bbde-48b4-9c44-12e59b1548b9-kube-api-access-lkwg4" (OuterVolumeSpecName: "kube-api-access-lkwg4") pod "26e2b4f0-bbde-48b4-9c44-12e59b1548b9" (UID: "26e2b4f0-bbde-48b4-9c44-12e59b1548b9"). InnerVolumeSpecName "kube-api-access-lkwg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.760078 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e2b4f0-bbde-48b4-9c44-12e59b1548b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.760114 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkwg4\" (UniqueName: \"kubernetes.io/projected/26e2b4f0-bbde-48b4-9c44-12e59b1548b9-kube-api-access-lkwg4\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.814081 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-gn86f" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.832875 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-216d-account-create-update-znwgr" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.833240 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zjlgk" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.861135 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1206-account-create-update-gbdkb" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.861194 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx7wm\" (UniqueName: \"kubernetes.io/projected/178b5736-03a6-439e-b1b8-b123b85d1876-kube-api-access-sx7wm\") pod \"178b5736-03a6-439e-b1b8-b123b85d1876\" (UID: \"178b5736-03a6-439e-b1b8-b123b85d1876\") " Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.861253 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/178b5736-03a6-439e-b1b8-b123b85d1876-operator-scripts\") pod \"178b5736-03a6-439e-b1b8-b123b85d1876\" (UID: \"178b5736-03a6-439e-b1b8-b123b85d1876\") " Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.861322 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8-operator-scripts\") pod \"4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8\" (UID: \"4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8\") " Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.861348 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzjf6\" (UniqueName: \"kubernetes.io/projected/4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8-kube-api-access-lzjf6\") pod \"4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8\" (UID: \"4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8\") " Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.861370 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc59d5d0-a534-49b4-977f-c0c787929ad7-operator-scripts\") pod \"fc59d5d0-a534-49b4-977f-c0c787929ad7\" (UID: \"fc59d5d0-a534-49b4-977f-c0c787929ad7\") " Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.861581 4828 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-nkhpl\" (UniqueName: \"kubernetes.io/projected/fc59d5d0-a534-49b4-977f-c0c787929ad7-kube-api-access-nkhpl\") pod \"fc59d5d0-a534-49b4-977f-c0c787929ad7\" (UID: \"fc59d5d0-a534-49b4-977f-c0c787929ad7\") " Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.862045 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8" (UID: "4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.862456 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/178b5736-03a6-439e-b1b8-b123b85d1876-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "178b5736-03a6-439e-b1b8-b123b85d1876" (UID: "178b5736-03a6-439e-b1b8-b123b85d1876"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.864157 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc59d5d0-a534-49b4-977f-c0c787929ad7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc59d5d0-a534-49b4-977f-c0c787929ad7" (UID: "fc59d5d0-a534-49b4-977f-c0c787929ad7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.864653 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/178b5736-03a6-439e-b1b8-b123b85d1876-kube-api-access-sx7wm" (OuterVolumeSpecName: "kube-api-access-sx7wm") pod "178b5736-03a6-439e-b1b8-b123b85d1876" (UID: "178b5736-03a6-439e-b1b8-b123b85d1876"). 
InnerVolumeSpecName "kube-api-access-sx7wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.865723 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8-kube-api-access-lzjf6" (OuterVolumeSpecName: "kube-api-access-lzjf6") pod "4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8" (UID: "4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8"). InnerVolumeSpecName "kube-api-access-lzjf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.870604 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc59d5d0-a534-49b4-977f-c0c787929ad7-kube-api-access-nkhpl" (OuterVolumeSpecName: "kube-api-access-nkhpl") pod "fc59d5d0-a534-49b4-977f-c0c787929ad7" (UID: "fc59d5d0-a534-49b4-977f-c0c787929ad7"). InnerVolumeSpecName "kube-api-access-nkhpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.874012 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bd2rb" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.893138 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6816-account-create-update-m6qkv" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.962408 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbt58\" (UniqueName: \"kubernetes.io/projected/ee34a7f9-16ab-4a44-855c-ed865e5d0331-kube-api-access-bbt58\") pod \"ee34a7f9-16ab-4a44-855c-ed865e5d0331\" (UID: \"ee34a7f9-16ab-4a44-855c-ed865e5d0331\") " Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.962459 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2clcq\" (UniqueName: \"kubernetes.io/projected/b394d40e-1759-4220-a59f-9d5d90957634-kube-api-access-2clcq\") pod \"b394d40e-1759-4220-a59f-9d5d90957634\" (UID: \"b394d40e-1759-4220-a59f-9d5d90957634\") " Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.962512 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/927d823f-6545-47a6-b9d6-3437c4f3d493-operator-scripts\") pod \"927d823f-6545-47a6-b9d6-3437c4f3d493\" (UID: \"927d823f-6545-47a6-b9d6-3437c4f3d493\") " Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.962536 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee34a7f9-16ab-4a44-855c-ed865e5d0331-operator-scripts\") pod \"ee34a7f9-16ab-4a44-855c-ed865e5d0331\" (UID: \"ee34a7f9-16ab-4a44-855c-ed865e5d0331\") " Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.962564 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b394d40e-1759-4220-a59f-9d5d90957634-operator-scripts\") pod \"b394d40e-1759-4220-a59f-9d5d90957634\" (UID: \"b394d40e-1759-4220-a59f-9d5d90957634\") " Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.962604 4828 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-gmbks\" (UniqueName: \"kubernetes.io/projected/927d823f-6545-47a6-b9d6-3437c4f3d493-kube-api-access-gmbks\") pod \"927d823f-6545-47a6-b9d6-3437c4f3d493\" (UID: \"927d823f-6545-47a6-b9d6-3437c4f3d493\") " Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.962842 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx7wm\" (UniqueName: \"kubernetes.io/projected/178b5736-03a6-439e-b1b8-b123b85d1876-kube-api-access-sx7wm\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.962877 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/178b5736-03a6-439e-b1b8-b123b85d1876-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.962887 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.962895 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzjf6\" (UniqueName: \"kubernetes.io/projected/4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8-kube-api-access-lzjf6\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.962905 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc59d5d0-a534-49b4-977f-c0c787929ad7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.962913 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkhpl\" (UniqueName: \"kubernetes.io/projected/fc59d5d0-a534-49b4-977f-c0c787929ad7-kube-api-access-nkhpl\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.963370 
4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/927d823f-6545-47a6-b9d6-3437c4f3d493-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "927d823f-6545-47a6-b9d6-3437c4f3d493" (UID: "927d823f-6545-47a6-b9d6-3437c4f3d493"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.963468 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee34a7f9-16ab-4a44-855c-ed865e5d0331-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee34a7f9-16ab-4a44-855c-ed865e5d0331" (UID: "ee34a7f9-16ab-4a44-855c-ed865e5d0331"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.963979 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b394d40e-1759-4220-a59f-9d5d90957634-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b394d40e-1759-4220-a59f-9d5d90957634" (UID: "b394d40e-1759-4220-a59f-9d5d90957634"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.966158 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee34a7f9-16ab-4a44-855c-ed865e5d0331-kube-api-access-bbt58" (OuterVolumeSpecName: "kube-api-access-bbt58") pod "ee34a7f9-16ab-4a44-855c-ed865e5d0331" (UID: "ee34a7f9-16ab-4a44-855c-ed865e5d0331"). InnerVolumeSpecName "kube-api-access-bbt58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.966575 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b394d40e-1759-4220-a59f-9d5d90957634-kube-api-access-2clcq" (OuterVolumeSpecName: "kube-api-access-2clcq") pod "b394d40e-1759-4220-a59f-9d5d90957634" (UID: "b394d40e-1759-4220-a59f-9d5d90957634"). InnerVolumeSpecName "kube-api-access-2clcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:41 crc kubenswrapper[4828]: I1129 07:22:41.967721 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927d823f-6545-47a6-b9d6-3437c4f3d493-kube-api-access-gmbks" (OuterVolumeSpecName: "kube-api-access-gmbks") pod "927d823f-6545-47a6-b9d6-3437c4f3d493" (UID: "927d823f-6545-47a6-b9d6-3437c4f3d493"). InnerVolumeSpecName "kube-api-access-gmbks". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.064851 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/927d823f-6545-47a6-b9d6-3437c4f3d493-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.064896 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee34a7f9-16ab-4a44-855c-ed865e5d0331-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.064909 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b394d40e-1759-4220-a59f-9d5d90957634-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.064921 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmbks\" (UniqueName: 
\"kubernetes.io/projected/927d823f-6545-47a6-b9d6-3437c4f3d493-kube-api-access-gmbks\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.064936 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbt58\" (UniqueName: \"kubernetes.io/projected/ee34a7f9-16ab-4a44-855c-ed865e5d0331-kube-api-access-bbt58\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.064947 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2clcq\" (UniqueName: \"kubernetes.io/projected/b394d40e-1759-4220-a59f-9d5d90957634-kube-api-access-2clcq\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.272046 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-gn86f" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.272788 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-gn86f" event={"ID":"178b5736-03a6-439e-b1b8-b123b85d1876","Type":"ContainerDied","Data":"eee995d861b45f7150792bbc2f02f62f06a1497ebd1a09297b2cf1e158f11d86"} Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.272823 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee995d861b45f7150792bbc2f02f62f06a1497ebd1a09297b2cf1e158f11d86" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.275888 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1206-account-create-update-gbdkb" event={"ID":"927d823f-6545-47a6-b9d6-3437c4f3d493","Type":"ContainerDied","Data":"24f3d967ded535268c7c3ac3b78a8ca49df9b3ab2e9bf21bfdce98080efe0598"} Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.275923 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24f3d967ded535268c7c3ac3b78a8ca49df9b3ab2e9bf21bfdce98080efe0598" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.276008 4828 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1206-account-create-update-gbdkb" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.279982 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-216d-account-create-update-znwgr" event={"ID":"4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8","Type":"ContainerDied","Data":"83ed4d879b39f01bb64eec5ffab4e41bab3d94ae5a02b4316e99bc87440804d2"} Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.280004 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-216d-account-create-update-znwgr" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.280024 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83ed4d879b39f01bb64eec5ffab4e41bab3d94ae5a02b4316e99bc87440804d2" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.282704 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zjlgk" event={"ID":"fc59d5d0-a534-49b4-977f-c0c787929ad7","Type":"ContainerDied","Data":"043dbde4f2f18573a6ddf35935c1925ca6071e78c0309b174c9925aab4d14724"} Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.283467 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="043dbde4f2f18573a6ddf35935c1925ca6071e78c0309b174c9925aab4d14724" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.282895 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zjlgk" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.284456 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bd2rb" event={"ID":"b394d40e-1759-4220-a59f-9d5d90957634","Type":"ContainerDied","Data":"7dffb6aeaedd95f423cdd6e2a588d016c6afe809c66cc6386e796e043304bb99"} Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.284506 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dffb6aeaedd95f423cdd6e2a588d016c6afe809c66cc6386e796e043304bb99" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.284562 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bd2rb" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.301748 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6816-account-create-update-m6qkv" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.302315 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6816-account-create-update-m6qkv" event={"ID":"ee34a7f9-16ab-4a44-855c-ed865e5d0331","Type":"ContainerDied","Data":"f1c7b04552445915b4d0f166325def5928554df05ef07eff6a0f952ef99c16ae"} Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.302379 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1c7b04552445915b4d0f166325def5928554df05ef07eff6a0f952ef99c16ae" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.304402 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2566-account-create-update-m95nq" event={"ID":"26e2b4f0-bbde-48b4-9c44-12e59b1548b9","Type":"ContainerDied","Data":"c3ce30b05f04677bab4de7a7d7a862e14da78bb44f099b3a9867ba673ad74e46"} Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.304449 4828 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c3ce30b05f04677bab4de7a7d7a862e14da78bb44f099b3a9867ba673ad74e46" Nov 29 07:22:42 crc kubenswrapper[4828]: I1129 07:22:42.304465 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2566-account-create-update-m95nq" Nov 29 07:22:49 crc kubenswrapper[4828]: I1129 07:22:49.372466 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"39a3fbd71693b18bdd66354a78dfea97bd511851045a0d640af3c38de59a8653"} Nov 29 07:22:49 crc kubenswrapper[4828]: I1129 07:22:49.373011 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"892f6a7f68a0ee4576e67136cd18592c132038da5de520e1b5250c16285d8d52"} Nov 29 07:22:49 crc kubenswrapper[4828]: I1129 07:22:49.376306 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lfbf" event={"ID":"ea442090-ae24-451d-ba14-2d18dbb4076a","Type":"ContainerStarted","Data":"139ff3ec2d599516a6e51591094162dc09581953895deba778ad2c3d27b6f738"} Nov 29 07:22:49 crc kubenswrapper[4828]: I1129 07:22:49.398535 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-9lfbf" podStartSLOduration=8.824325316 podStartE2EDuration="18.398505013s" podCreationTimestamp="2025-11-29 07:22:31 +0000 UTC" firstStartedPulling="2025-11-29 07:22:38.866697417 +0000 UTC m=+1298.488773465" lastFinishedPulling="2025-11-29 07:22:48.440877104 +0000 UTC m=+1308.062953162" observedRunningTime="2025-11-29 07:22:49.389063845 +0000 UTC m=+1309.011139903" watchObservedRunningTime="2025-11-29 07:22:49.398505013 +0000 UTC m=+1309.020581071" Nov 29 07:22:52 crc kubenswrapper[4828]: I1129 07:22:52.406429 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"0c31aa64f7b22be66613008a6f462dbba145b6ee0bb37eda6fb3365f838e3edc"} Nov 29 07:22:53 crc kubenswrapper[4828]: I1129 07:22:53.421720 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"8849f63a2cbed7c09c9b68c6b643f673b15dc432409f1f45fa37da1bcb86ee8c"} Nov 29 07:22:54 crc kubenswrapper[4828]: I1129 07:22:54.437532 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"090efac7e42f3dc2158be41ae6f84a03a4f0cf39fe80263719719b352da08ea3"} Nov 29 07:22:55 crc kubenswrapper[4828]: I1129 07:22:55.462310 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"f31a14c6ba469dbee80c32d98e5d2dc07ae47ba39b35f458eae6be28fca742a1"} Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.478745 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed93966d-a9d0-456c-b459-f06703deef71","Type":"ContainerStarted","Data":"81738f3d1d9b19e934a53f0b6c9417312e2c4ebe1e16255ab60f7a09531254c3"} Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.482023 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wc5ng" event={"ID":"5e2b60cb-6670-4720-8aaf-3db7307905b0","Type":"ContainerStarted","Data":"cb22272f1c7ebd3421c6cee06ec017b778b971a2311ec3aff754e2f293dd8ee9"} Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.520509 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=41.853496949 podStartE2EDuration="1m10.520484712s" podCreationTimestamp="2025-11-29 07:21:46 +0000 UTC" firstStartedPulling="2025-11-29 
07:22:19.968507954 +0000 UTC m=+1279.590584012" lastFinishedPulling="2025-11-29 07:22:48.635495717 +0000 UTC m=+1308.257571775" observedRunningTime="2025-11-29 07:22:56.514639395 +0000 UTC m=+1316.136715473" watchObservedRunningTime="2025-11-29 07:22:56.520484712 +0000 UTC m=+1316.142560770" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.539072 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-wc5ng" podStartSLOduration=2.559763332 podStartE2EDuration="37.5390479s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:20.270135278 +0000 UTC m=+1279.892211326" lastFinishedPulling="2025-11-29 07:22:55.249419836 +0000 UTC m=+1314.871495894" observedRunningTime="2025-11-29 07:22:56.534098145 +0000 UTC m=+1316.156174223" watchObservedRunningTime="2025-11-29 07:22:56.5390479 +0000 UTC m=+1316.161123958" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.802986 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-kc7t2"] Nov 29 07:22:56 crc kubenswrapper[4828]: E1129 07:22:56.803431 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b394d40e-1759-4220-a59f-9d5d90957634" containerName="mariadb-database-create" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803455 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b394d40e-1759-4220-a59f-9d5d90957634" containerName="mariadb-database-create" Nov 29 07:22:56 crc kubenswrapper[4828]: E1129 07:22:56.803471 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc59d5d0-a534-49b4-977f-c0c787929ad7" containerName="mariadb-database-create" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803487 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc59d5d0-a534-49b4-977f-c0c787929ad7" containerName="mariadb-database-create" Nov 29 07:22:56 crc kubenswrapper[4828]: E1129 07:22:56.803504 4828 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ee34a7f9-16ab-4a44-855c-ed865e5d0331" containerName="mariadb-account-create-update" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803512 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee34a7f9-16ab-4a44-855c-ed865e5d0331" containerName="mariadb-account-create-update" Nov 29 07:22:56 crc kubenswrapper[4828]: E1129 07:22:56.803521 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8" containerName="mariadb-account-create-update" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803529 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8" containerName="mariadb-account-create-update" Nov 29 07:22:56 crc kubenswrapper[4828]: E1129 07:22:56.803546 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178b5736-03a6-439e-b1b8-b123b85d1876" containerName="mariadb-database-create" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803556 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="178b5736-03a6-439e-b1b8-b123b85d1876" containerName="mariadb-database-create" Nov 29 07:22:56 crc kubenswrapper[4828]: E1129 07:22:56.803570 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e2b4f0-bbde-48b4-9c44-12e59b1548b9" containerName="mariadb-account-create-update" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803579 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e2b4f0-bbde-48b4-9c44-12e59b1548b9" containerName="mariadb-account-create-update" Nov 29 07:22:56 crc kubenswrapper[4828]: E1129 07:22:56.803590 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927d823f-6545-47a6-b9d6-3437c4f3d493" containerName="mariadb-account-create-update" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803597 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="927d823f-6545-47a6-b9d6-3437c4f3d493" 
containerName="mariadb-account-create-update" Nov 29 07:22:56 crc kubenswrapper[4828]: E1129 07:22:56.803617 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdeb5e1-cc93-4735-9968-0643cf836b22" containerName="mariadb-database-create" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803624 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdeb5e1-cc93-4735-9968-0643cf836b22" containerName="mariadb-database-create" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803828 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="178b5736-03a6-439e-b1b8-b123b85d1876" containerName="mariadb-database-create" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803855 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee34a7f9-16ab-4a44-855c-ed865e5d0331" containerName="mariadb-account-create-update" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803890 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="927d823f-6545-47a6-b9d6-3437c4f3d493" containerName="mariadb-account-create-update" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803903 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cdeb5e1-cc93-4735-9968-0643cf836b22" containerName="mariadb-database-create" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803915 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e2b4f0-bbde-48b4-9c44-12e59b1548b9" containerName="mariadb-account-create-update" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803936 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b394d40e-1759-4220-a59f-9d5d90957634" containerName="mariadb-database-create" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.803947 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8" containerName="mariadb-account-create-update" Nov 29 07:22:56 crc 
kubenswrapper[4828]: I1129 07:22:56.803959 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc59d5d0-a534-49b4-977f-c0c787929ad7" containerName="mariadb-database-create" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.804925 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.817587 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.826169 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-kc7t2"] Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.877121 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.877473 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhp75\" (UniqueName: \"kubernetes.io/projected/af9e6d63-0e81-41f4-8956-20283653b149-kube-api-access-qhp75\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.877520 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: 
I1129 07:22:56.877563 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.877636 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.877659 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-config\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.981199 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.981244 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-config\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.981279 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.981296 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhp75\" (UniqueName: \"kubernetes.io/projected/af9e6d63-0e81-41f4-8956-20283653b149-kube-api-access-qhp75\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.981338 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.981393 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.982769 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.983741 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.984357 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-config\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.985566 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:56 crc kubenswrapper[4828]: I1129 07:22:56.982191 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:57 crc kubenswrapper[4828]: I1129 07:22:57.013842 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhp75\" (UniqueName: \"kubernetes.io/projected/af9e6d63-0e81-41f4-8956-20283653b149-kube-api-access-qhp75\") pod \"dnsmasq-dns-77585f5f8c-kc7t2\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:57 crc kubenswrapper[4828]: I1129 07:22:57.141221 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:22:57 crc kubenswrapper[4828]: I1129 07:22:57.597069 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-kc7t2"] Nov 29 07:22:57 crc kubenswrapper[4828]: W1129 07:22:57.615454 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf9e6d63_0e81_41f4_8956_20283653b149.slice/crio-f40ab9509bb49dc4a0e01d9ca653bad62b9a788cbfbdac77c9aa5fe50b1cfc5e WatchSource:0}: Error finding container f40ab9509bb49dc4a0e01d9ca653bad62b9a788cbfbdac77c9aa5fe50b1cfc5e: Status 404 returned error can't find the container with id f40ab9509bb49dc4a0e01d9ca653bad62b9a788cbfbdac77c9aa5fe50b1cfc5e Nov 29 07:22:58 crc kubenswrapper[4828]: I1129 07:22:58.517835 4828 generic.go:334] "Generic (PLEG): container finished" podID="af9e6d63-0e81-41f4-8956-20283653b149" containerID="a5e033a89544c2b1c16c34d9653149e5a1c7d027f9f1db187f8e45acd2f8cdb5" exitCode=0 Nov 29 07:22:58 crc kubenswrapper[4828]: I1129 07:22:58.517899 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" event={"ID":"af9e6d63-0e81-41f4-8956-20283653b149","Type":"ContainerDied","Data":"a5e033a89544c2b1c16c34d9653149e5a1c7d027f9f1db187f8e45acd2f8cdb5"} Nov 29 07:22:58 crc kubenswrapper[4828]: I1129 07:22:58.518124 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" event={"ID":"af9e6d63-0e81-41f4-8956-20283653b149","Type":"ContainerStarted","Data":"f40ab9509bb49dc4a0e01d9ca653bad62b9a788cbfbdac77c9aa5fe50b1cfc5e"} Nov 29 07:23:00 crc kubenswrapper[4828]: I1129 07:23:00.537285 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" event={"ID":"af9e6d63-0e81-41f4-8956-20283653b149","Type":"ContainerStarted","Data":"f627145d47a9f75206bb28a9585a99339eabbc54caaefeb45cf3668145267f10"} Nov 29 07:23:00 crc 
kubenswrapper[4828]: I1129 07:23:00.537793 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:23:00 crc kubenswrapper[4828]: I1129 07:23:00.555511 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" podStartSLOduration=4.555490001 podStartE2EDuration="4.555490001s" podCreationTimestamp="2025-11-29 07:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:23:00.552524426 +0000 UTC m=+1320.174600494" watchObservedRunningTime="2025-11-29 07:23:00.555490001 +0000 UTC m=+1320.177566059" Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.143493 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.217507 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-q2prx"] Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.221094 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-q2prx" podUID="9ee7db07-ea2d-4f79-b976-70340967aa87" containerName="dnsmasq-dns" containerID="cri-o://e11bd9624f55cc4017804f0f6964bce48f684d2fb0d376ff52f453ba1bd5506b" gracePeriod=10 Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.614226 4828 generic.go:334] "Generic (PLEG): container finished" podID="9ee7db07-ea2d-4f79-b976-70340967aa87" containerID="e11bd9624f55cc4017804f0f6964bce48f684d2fb0d376ff52f453ba1bd5506b" exitCode=0 Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.614343 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-q2prx" event={"ID":"9ee7db07-ea2d-4f79-b976-70340967aa87","Type":"ContainerDied","Data":"e11bd9624f55cc4017804f0f6964bce48f684d2fb0d376ff52f453ba1bd5506b"} 
Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.615776 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-q2prx" event={"ID":"9ee7db07-ea2d-4f79-b976-70340967aa87","Type":"ContainerDied","Data":"623e4ff534af4f07f3f4e6edcfa3bdea3cbfec9675c9a9184c6b0d109202a08b"} Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.615831 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="623e4ff534af4f07f3f4e6edcfa3bdea3cbfec9675c9a9184c6b0d109202a08b" Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.663195 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.761593 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-dns-svc\") pod \"9ee7db07-ea2d-4f79-b976-70340967aa87\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.761738 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcl7z\" (UniqueName: \"kubernetes.io/projected/9ee7db07-ea2d-4f79-b976-70340967aa87-kube-api-access-wcl7z\") pod \"9ee7db07-ea2d-4f79-b976-70340967aa87\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.761778 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-ovsdbserver-nb\") pod \"9ee7db07-ea2d-4f79-b976-70340967aa87\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.761812 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-ovsdbserver-sb\") pod \"9ee7db07-ea2d-4f79-b976-70340967aa87\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.761945 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-config\") pod \"9ee7db07-ea2d-4f79-b976-70340967aa87\" (UID: \"9ee7db07-ea2d-4f79-b976-70340967aa87\") " Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.777383 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ee7db07-ea2d-4f79-b976-70340967aa87-kube-api-access-wcl7z" (OuterVolumeSpecName: "kube-api-access-wcl7z") pod "9ee7db07-ea2d-4f79-b976-70340967aa87" (UID: "9ee7db07-ea2d-4f79-b976-70340967aa87"). InnerVolumeSpecName "kube-api-access-wcl7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.815785 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-config" (OuterVolumeSpecName: "config") pod "9ee7db07-ea2d-4f79-b976-70340967aa87" (UID: "9ee7db07-ea2d-4f79-b976-70340967aa87"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.827611 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ee7db07-ea2d-4f79-b976-70340967aa87" (UID: "9ee7db07-ea2d-4f79-b976-70340967aa87"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.830872 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ee7db07-ea2d-4f79-b976-70340967aa87" (UID: "9ee7db07-ea2d-4f79-b976-70340967aa87"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.833017 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ee7db07-ea2d-4f79-b976-70340967aa87" (UID: "9ee7db07-ea2d-4f79-b976-70340967aa87"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.865761 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.865830 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcl7z\" (UniqueName: \"kubernetes.io/projected/9ee7db07-ea2d-4f79-b976-70340967aa87-kube-api-access-wcl7z\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.865851 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.865863 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 
07:23:07 crc kubenswrapper[4828]: I1129 07:23:07.865877 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ee7db07-ea2d-4f79-b976-70340967aa87-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:08 crc kubenswrapper[4828]: I1129 07:23:08.627129 4828 generic.go:334] "Generic (PLEG): container finished" podID="ea442090-ae24-451d-ba14-2d18dbb4076a" containerID="139ff3ec2d599516a6e51591094162dc09581953895deba778ad2c3d27b6f738" exitCode=0 Nov 29 07:23:08 crc kubenswrapper[4828]: I1129 07:23:08.627242 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lfbf" event={"ID":"ea442090-ae24-451d-ba14-2d18dbb4076a","Type":"ContainerDied","Data":"139ff3ec2d599516a6e51591094162dc09581953895deba778ad2c3d27b6f738"} Nov 29 07:23:08 crc kubenswrapper[4828]: I1129 07:23:08.627522 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-q2prx" Nov 29 07:23:08 crc kubenswrapper[4828]: I1129 07:23:08.673985 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-q2prx"] Nov 29 07:23:08 crc kubenswrapper[4828]: I1129 07:23:08.681509 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-q2prx"] Nov 29 07:23:09 crc kubenswrapper[4828]: I1129 07:23:09.424421 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ee7db07-ea2d-4f79-b976-70340967aa87" path="/var/lib/kubelet/pods/9ee7db07-ea2d-4f79-b976-70340967aa87/volumes" Nov 29 07:23:09 crc kubenswrapper[4828]: I1129 07:23:09.926351 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9lfbf" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.031742 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhsps\" (UniqueName: \"kubernetes.io/projected/ea442090-ae24-451d-ba14-2d18dbb4076a-kube-api-access-vhsps\") pod \"ea442090-ae24-451d-ba14-2d18dbb4076a\" (UID: \"ea442090-ae24-451d-ba14-2d18dbb4076a\") " Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.031822 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea442090-ae24-451d-ba14-2d18dbb4076a-config-data\") pod \"ea442090-ae24-451d-ba14-2d18dbb4076a\" (UID: \"ea442090-ae24-451d-ba14-2d18dbb4076a\") " Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.031916 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea442090-ae24-451d-ba14-2d18dbb4076a-combined-ca-bundle\") pod \"ea442090-ae24-451d-ba14-2d18dbb4076a\" (UID: \"ea442090-ae24-451d-ba14-2d18dbb4076a\") " Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.038209 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea442090-ae24-451d-ba14-2d18dbb4076a-kube-api-access-vhsps" (OuterVolumeSpecName: "kube-api-access-vhsps") pod "ea442090-ae24-451d-ba14-2d18dbb4076a" (UID: "ea442090-ae24-451d-ba14-2d18dbb4076a"). InnerVolumeSpecName "kube-api-access-vhsps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.060885 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea442090-ae24-451d-ba14-2d18dbb4076a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea442090-ae24-451d-ba14-2d18dbb4076a" (UID: "ea442090-ae24-451d-ba14-2d18dbb4076a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.079658 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea442090-ae24-451d-ba14-2d18dbb4076a-config-data" (OuterVolumeSpecName: "config-data") pod "ea442090-ae24-451d-ba14-2d18dbb4076a" (UID: "ea442090-ae24-451d-ba14-2d18dbb4076a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.133327 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhsps\" (UniqueName: \"kubernetes.io/projected/ea442090-ae24-451d-ba14-2d18dbb4076a-kube-api-access-vhsps\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.133361 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea442090-ae24-451d-ba14-2d18dbb4076a-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.133371 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea442090-ae24-451d-ba14-2d18dbb4076a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.644578 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lfbf" event={"ID":"ea442090-ae24-451d-ba14-2d18dbb4076a","Type":"ContainerDied","Data":"453dc845c8fc6b765b25667dea729de2d290df34b1f05e23ed5616218ccca600"} Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.644625 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="453dc845c8fc6b765b25667dea729de2d290df34b1f05e23ed5616218ccca600" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.644679 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9lfbf" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.912938 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-pr5vw"] Nov 29 07:23:10 crc kubenswrapper[4828]: E1129 07:23:10.913387 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ee7db07-ea2d-4f79-b976-70340967aa87" containerName="init" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.913408 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ee7db07-ea2d-4f79-b976-70340967aa87" containerName="init" Nov 29 07:23:10 crc kubenswrapper[4828]: E1129 07:23:10.913424 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea442090-ae24-451d-ba14-2d18dbb4076a" containerName="keystone-db-sync" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.913430 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea442090-ae24-451d-ba14-2d18dbb4076a" containerName="keystone-db-sync" Nov 29 07:23:10 crc kubenswrapper[4828]: E1129 07:23:10.913442 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ee7db07-ea2d-4f79-b976-70340967aa87" containerName="dnsmasq-dns" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.913451 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ee7db07-ea2d-4f79-b976-70340967aa87" containerName="dnsmasq-dns" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.913631 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea442090-ae24-451d-ba14-2d18dbb4076a" containerName="keystone-db-sync" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.913656 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ee7db07-ea2d-4f79-b976-70340967aa87" containerName="dnsmasq-dns" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.914664 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.930564 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-pr5vw"] Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.962084 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-42d82"] Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.963458 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-42d82" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.972635 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5wkrh" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.972856 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.972921 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.973162 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.973174 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 07:23:10 crc kubenswrapper[4828]: I1129 07:23:10.984672 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-42d82"] Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.046355 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-config\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.046467 
4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp8bv\" (UniqueName: \"kubernetes.io/projected/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-kube-api-access-jp8bv\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.046512 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.046617 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.046650 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.046675 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-dns-svc\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 
07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.055572 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-4tb4g"] Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.057021 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-4tb4g" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.061618 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-8vljh" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.061914 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.086853 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-4tb4g"] Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.147862 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-config\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.147930 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-fernet-keys\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.147973 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp8bv\" (UniqueName: \"kubernetes.io/projected/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-kube-api-access-jp8bv\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc 
kubenswrapper[4828]: I1129 07:23:11.147990 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-credential-keys\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.148008 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-combined-ca-bundle\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.148035 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.148067 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-scripts\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.148092 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-config-data\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82" Nov 29 07:23:11 crc kubenswrapper[4828]: 
I1129 07:23:11.148149 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.148180 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnkkm\" (UniqueName: \"kubernetes.io/projected/1b10deca-68bc-4694-b1b5-dd907a68af44-kube-api-access-pnkkm\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.148197 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.148217 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-dns-svc\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.149245 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-dns-svc\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.149856 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.151022 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-config\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.151743 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.151757 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.193474 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp8bv\" (UniqueName: \"kubernetes.io/projected/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-kube-api-access-jp8bv\") pod \"dnsmasq-dns-55fff446b9-pr5vw\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.208336 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:23:11 crc 
kubenswrapper[4828]: I1129 07:23:11.210382 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.214660 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.218343 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.240620 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-pr5vw"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.245114 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.263065 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-pr5vw"]
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.264312 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebec231e-52d4-4a47-9391-c57530dc6de4-combined-ca-bundle\") pod \"heat-db-sync-4tb4g\" (UID: \"ebec231e-52d4-4a47-9391-c57530dc6de4\") " pod="openstack/heat-db-sync-4tb4g"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.264365 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn6mb\" (UniqueName: \"kubernetes.io/projected/ebec231e-52d4-4a47-9391-c57530dc6de4-kube-api-access-jn6mb\") pod \"heat-db-sync-4tb4g\" (UID: \"ebec231e-52d4-4a47-9391-c57530dc6de4\") " pod="openstack/heat-db-sync-4tb4g"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.264416 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-credential-keys\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.264446 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-combined-ca-bundle\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.265192 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-scripts\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.265260 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-config-data\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.265383 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebec231e-52d4-4a47-9391-c57530dc6de4-config-data\") pod \"heat-db-sync-4tb4g\" (UID: \"ebec231e-52d4-4a47-9391-c57530dc6de4\") " pod="openstack/heat-db-sync-4tb4g"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.265448 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnkkm\" (UniqueName: \"kubernetes.io/projected/1b10deca-68bc-4694-b1b5-dd907a68af44-kube-api-access-pnkkm\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.265558 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-fernet-keys\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.277291 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-fernet-keys\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.279573 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-config-data\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.280708 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-scripts\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.288194 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-credential-keys\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.288453 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-combined-ca-bundle\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.296345 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-dwxw5"]
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.305607 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.310939 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-vphwh"]
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.312419 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vphwh"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.312786 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-49vhl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.313172 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.313848 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.320707 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-qsztl"]
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.323751 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.329655 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kfl2r"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.330012 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.341151 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dwxw5"]
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.342503 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnkkm\" (UniqueName: \"kubernetes.io/projected/1b10deca-68bc-4694-b1b5-dd907a68af44-kube-api-access-pnkkm\") pod \"keystone-bootstrap-42d82\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " pod="openstack/keystone-bootstrap-42d82"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.351603 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-t8dd8"]
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.354092 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t8dd8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.363756 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-qsztl"]
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.364712 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.364907 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.365760 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-l7tgb"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.366889 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebec231e-52d4-4a47-9391-c57530dc6de4-combined-ca-bundle\") pod \"heat-db-sync-4tb4g\" (UID: \"ebec231e-52d4-4a47-9391-c57530dc6de4\") " pod="openstack/heat-db-sync-4tb4g"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.366928 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adf23e65-d886-48b7-b5b8-8f23a81cdc81-run-httpd\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.366947 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn6mb\" (UniqueName: \"kubernetes.io/projected/ebec231e-52d4-4a47-9391-c57530dc6de4-kube-api-access-jn6mb\") pod \"heat-db-sync-4tb4g\" (UID: \"ebec231e-52d4-4a47-9391-c57530dc6de4\") " pod="openstack/heat-db-sync-4tb4g"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.366968 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-combined-ca-bundle\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.366987 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42vql\" (UniqueName: \"kubernetes.io/projected/b6340ac2-1618-4eab-9dce-47cffd0957b3-kube-api-access-42vql\") pod \"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.369335 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vphwh"]
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.367008 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-config-data\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.371571 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-scripts\") pod \"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.371627 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-scripts\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.371664 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.371694 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pph8\" (UniqueName: \"kubernetes.io/projected/3d3d2548-679c-4c58-8709-a28f3178c1d5-kube-api-access-8pph8\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.371757 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.371799 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-config\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.371823 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-combined-ca-bundle\") pod \"barbican-db-sync-vphwh\" (UID: \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\") " pod="openstack/barbican-db-sync-vphwh"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.371847 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.371879 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6340ac2-1618-4eab-9dce-47cffd0957b3-logs\") pod \"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.371936 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d3d2548-679c-4c58-8709-a28f3178c1d5-etc-machine-id\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.371972 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d25wd\" (UniqueName: \"kubernetes.io/projected/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-kube-api-access-d25wd\") pod \"barbican-db-sync-vphwh\" (UID: \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\") " pod="openstack/barbican-db-sync-vphwh"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.372007 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-db-sync-config-data\") pod \"barbican-db-sync-vphwh\" (UID: \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\") " pod="openstack/barbican-db-sync-vphwh"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.372036 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-config-data\") pod \"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.372062 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7zsk\" (UniqueName: \"kubernetes.io/projected/297f4501-d996-4d63-8936-a65af6acf060-kube-api-access-v7zsk\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.372108 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.372143 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adf23e65-d886-48b7-b5b8-8f23a81cdc81-log-httpd\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.372179 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-db-sync-config-data\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.372233 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebec231e-52d4-4a47-9391-c57530dc6de4-config-data\") pod \"heat-db-sync-4tb4g\" (UID: \"ebec231e-52d4-4a47-9391-c57530dc6de4\") " pod="openstack/heat-db-sync-4tb4g"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.372309 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-config-data\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.372355 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.372394 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-scripts\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.372438 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.372503 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-combined-ca-bundle\") pod \"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.372535 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndfs8\" (UniqueName: \"kubernetes.io/projected/adf23e65-d886-48b7-b5b8-8f23a81cdc81-kube-api-access-ndfs8\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.375939 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebec231e-52d4-4a47-9391-c57530dc6de4-combined-ca-bundle\") pod \"heat-db-sync-4tb4g\" (UID: \"ebec231e-52d4-4a47-9391-c57530dc6de4\") " pod="openstack/heat-db-sync-4tb4g"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.382564 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t8dd8"]
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.399432 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebec231e-52d4-4a47-9391-c57530dc6de4-config-data\") pod \"heat-db-sync-4tb4g\" (UID: \"ebec231e-52d4-4a47-9391-c57530dc6de4\") " pod="openstack/heat-db-sync-4tb4g"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.406902 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-mhgs8"]
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.408236 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mhgs8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.412605 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn6mb\" (UniqueName: \"kubernetes.io/projected/ebec231e-52d4-4a47-9391-c57530dc6de4-kube-api-access-jn6mb\") pod \"heat-db-sync-4tb4g\" (UID: \"ebec231e-52d4-4a47-9391-c57530dc6de4\") " pod="openstack/heat-db-sync-4tb4g"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.413033 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.413261 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.415035 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2hc7w"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.475043 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-scripts\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.475106 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.475141 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pph8\" (UniqueName: \"kubernetes.io/projected/3d3d2548-679c-4c58-8709-a28f3178c1d5-kube-api-access-8pph8\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.475187 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.475218 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-config\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.475639 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-combined-ca-bundle\") pod \"barbican-db-sync-vphwh\" (UID: \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\") " pod="openstack/barbican-db-sync-vphwh"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.475892 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.475929 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6340ac2-1618-4eab-9dce-47cffd0957b3-logs\") pod \"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.475976 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d3d2548-679c-4c58-8709-a28f3178c1d5-etc-machine-id\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476010 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d25wd\" (UniqueName: \"kubernetes.io/projected/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-kube-api-access-d25wd\") pod \"barbican-db-sync-vphwh\" (UID: \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\") " pod="openstack/barbican-db-sync-vphwh"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476043 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-db-sync-config-data\") pod \"barbican-db-sync-vphwh\" (UID: \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\") " pod="openstack/barbican-db-sync-vphwh"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476093 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-config-data\") pod \"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476118 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7zsk\" (UniqueName: \"kubernetes.io/projected/297f4501-d996-4d63-8936-a65af6acf060-kube-api-access-v7zsk\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476178 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/70dc014d-201b-448d-84ba-2c89e7c10855-config\") pod \"neutron-db-sync-mhgs8\" (UID: \"70dc014d-201b-448d-84ba-2c89e7c10855\") " pod="openstack/neutron-db-sync-mhgs8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476219 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476254 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adf23e65-d886-48b7-b5b8-8f23a81cdc81-log-httpd\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476309 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-db-sync-config-data\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476393 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-config-data\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476465 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476496 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-scripts\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476532 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476607 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-combined-ca-bundle\") pod \"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476636 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlsgz\" (UniqueName: \"kubernetes.io/projected/70dc014d-201b-448d-84ba-2c89e7c10855-kube-api-access-qlsgz\") pod \"neutron-db-sync-mhgs8\" (UID: \"70dc014d-201b-448d-84ba-2c89e7c10855\") " pod="openstack/neutron-db-sync-mhgs8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476698 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndfs8\" (UniqueName: \"kubernetes.io/projected/adf23e65-d886-48b7-b5b8-8f23a81cdc81-kube-api-access-ndfs8\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476728 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70dc014d-201b-448d-84ba-2c89e7c10855-combined-ca-bundle\") pod \"neutron-db-sync-mhgs8\" (UID: \"70dc014d-201b-448d-84ba-2c89e7c10855\") " pod="openstack/neutron-db-sync-mhgs8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476781 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adf23e65-d886-48b7-b5b8-8f23a81cdc81-run-httpd\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476810 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-combined-ca-bundle\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476839 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42vql\" (UniqueName: \"kubernetes.io/projected/b6340ac2-1618-4eab-9dce-47cffd0957b3-kube-api-access-42vql\") pod \"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476874 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-config-data\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.476919 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-scripts\") pod \"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.483825 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mhgs8"]
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.502492 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-scripts\") pod \"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.511450 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.512813 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.514483 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-combined-ca-bundle\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.515322 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d3d2548-679c-4c58-8709-a28f3178c1d5-etc-machine-id\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.515988 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-db-sync-config-data\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.516759 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adf23e65-d886-48b7-b5b8-8f23a81cdc81-log-httpd\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.516819 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adf23e65-d886-48b7-b5b8-8f23a81cdc81-run-httpd\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.517390 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.517941 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-config\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.518014 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.518470 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.519155 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-scripts\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.521688 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6340ac2-1618-4eab-9dce-47cffd0957b3-logs\") pod \"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.539020 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-config-data\") pod \"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8"
Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.543980 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"kube-api-access-8pph8\" (UniqueName: \"kubernetes.io/projected/3d3d2548-679c-4c58-8709-a28f3178c1d5-kube-api-access-8pph8\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.544483 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d25wd\" (UniqueName: \"kubernetes.io/projected/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-kube-api-access-d25wd\") pod \"barbican-db-sync-vphwh\" (UID: \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\") " pod="openstack/barbican-db-sync-vphwh" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.545415 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-config-data\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.552860 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.552898 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-db-sync-config-data\") pod \"barbican-db-sync-vphwh\" (UID: \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\") " pod="openstack/barbican-db-sync-vphwh" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.555701 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42vql\" (UniqueName: \"kubernetes.io/projected/b6340ac2-1618-4eab-9dce-47cffd0957b3-kube-api-access-42vql\") pod 
\"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.556159 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7zsk\" (UniqueName: \"kubernetes.io/projected/297f4501-d996-4d63-8936-a65af6acf060-kube-api-access-v7zsk\") pod \"dnsmasq-dns-76fcf4b695-qsztl\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.558503 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-combined-ca-bundle\") pod \"barbican-db-sync-vphwh\" (UID: \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\") " pod="openstack/barbican-db-sync-vphwh" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.560582 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-combined-ca-bundle\") pod \"placement-db-sync-t8dd8\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " pod="openstack/placement-db-sync-t8dd8" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.562048 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-scripts\") pod \"cinder-db-sync-dwxw5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " pod="openstack/cinder-db-sync-dwxw5" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.562926 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dwxw5" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.563722 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndfs8\" (UniqueName: \"kubernetes.io/projected/adf23e65-d886-48b7-b5b8-8f23a81cdc81-kube-api-access-ndfs8\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.565867 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-config-data\") pod \"ceilometer-0\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " pod="openstack/ceilometer-0" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.574970 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vphwh" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.577975 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/70dc014d-201b-448d-84ba-2c89e7c10855-config\") pod \"neutron-db-sync-mhgs8\" (UID: \"70dc014d-201b-448d-84ba-2c89e7c10855\") " pod="openstack/neutron-db-sync-mhgs8" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.578138 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlsgz\" (UniqueName: \"kubernetes.io/projected/70dc014d-201b-448d-84ba-2c89e7c10855-kube-api-access-qlsgz\") pod \"neutron-db-sync-mhgs8\" (UID: \"70dc014d-201b-448d-84ba-2c89e7c10855\") " pod="openstack/neutron-db-sync-mhgs8" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.578173 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70dc014d-201b-448d-84ba-2c89e7c10855-combined-ca-bundle\") pod \"neutron-db-sync-mhgs8\" (UID: 
\"70dc014d-201b-448d-84ba-2c89e7c10855\") " pod="openstack/neutron-db-sync-mhgs8" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.580013 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.591927 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-42d82" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.595799 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70dc014d-201b-448d-84ba-2c89e7c10855-combined-ca-bundle\") pod \"neutron-db-sync-mhgs8\" (UID: \"70dc014d-201b-448d-84ba-2c89e7c10855\") " pod="openstack/neutron-db-sync-mhgs8" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.596105 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.602075 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlsgz\" (UniqueName: \"kubernetes.io/projected/70dc014d-201b-448d-84ba-2c89e7c10855-kube-api-access-qlsgz\") pod \"neutron-db-sync-mhgs8\" (UID: \"70dc014d-201b-448d-84ba-2c89e7c10855\") " pod="openstack/neutron-db-sync-mhgs8" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.610794 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/70dc014d-201b-448d-84ba-2c89e7c10855-config\") pod \"neutron-db-sync-mhgs8\" (UID: \"70dc014d-201b-448d-84ba-2c89e7c10855\") " pod="openstack/neutron-db-sync-mhgs8" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.671217 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-t8dd8" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.683754 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-4tb4g" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.683961 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mhgs8" Nov 29 07:23:11 crc kubenswrapper[4828]: I1129 07:23:11.929600 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-pr5vw"] Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.104376 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dwxw5"] Nov 29 07:23:12 crc kubenswrapper[4828]: W1129 07:23:12.126522 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d3d2548_679c_4c58_8709_a28f3178c1d5.slice/crio-95e0f940814346a7997985cad5a2437b837279c6866cb456069139475d703c6f WatchSource:0}: Error finding container 95e0f940814346a7997985cad5a2437b837279c6866cb456069139475d703c6f: Status 404 returned error can't find the container with id 95e0f940814346a7997985cad5a2437b837279c6866cb456069139475d703c6f Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.286575 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vphwh"] Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.339582 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-42d82"] Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.411688 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.432951 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-qsztl"] Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.599912 4828 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mhgs8"] Nov 29 07:23:12 crc kubenswrapper[4828]: W1129 07:23:12.603116 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70dc014d_201b_448d_84ba_2c89e7c10855.slice/crio-ea853d42e1d81e749d9473f07008c501b3441a71889a8416ca79b3a29b5ac4e9 WatchSource:0}: Error finding container ea853d42e1d81e749d9473f07008c501b3441a71889a8416ca79b3a29b5ac4e9: Status 404 returned error can't find the container with id ea853d42e1d81e749d9473f07008c501b3441a71889a8416ca79b3a29b5ac4e9 Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.666297 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t8dd8"] Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.694573 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dwxw5" event={"ID":"3d3d2548-679c-4c58-8709-a28f3178c1d5","Type":"ContainerStarted","Data":"95e0f940814346a7997985cad5a2437b837279c6866cb456069139475d703c6f"} Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.697493 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-42d82" event={"ID":"1b10deca-68bc-4694-b1b5-dd907a68af44","Type":"ContainerStarted","Data":"6696b83fab2a1b215f8fd108965abfaf7b930e2e4a2dfe3ff28006da38c06912"} Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.699188 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" event={"ID":"297f4501-d996-4d63-8936-a65af6acf060","Type":"ContainerStarted","Data":"e35289949e6efe7c0ef1864e4556e69598109931c3d0d7197f56d29cf9fddd5d"} Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.702485 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8dd8" 
event={"ID":"b6340ac2-1618-4eab-9dce-47cffd0957b3","Type":"ContainerStarted","Data":"98f02e572bbf49cf7871a1e49fc6b8a61693464f4fb0d9d3b3079618d5be0d44"} Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.704832 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" event={"ID":"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f","Type":"ContainerStarted","Data":"e4eb6513ab689e543bc0b5afbe5b8ac35b2f6442fa7fdb34e0017f2e0cfc6a81"} Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.704870 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" event={"ID":"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f","Type":"ContainerStarted","Data":"c3cfb417705d6100538877e81fd97ff263b5ac6689eeda5eacd08bdefe55667b"} Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.706457 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adf23e65-d886-48b7-b5b8-8f23a81cdc81","Type":"ContainerStarted","Data":"9e184e59a43be1d261cf4a8b3d4259bbe2a15b9a881c738a52d6d37090df520a"} Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.707824 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vphwh" event={"ID":"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c","Type":"ContainerStarted","Data":"850bf65a0b38e5a4857c55e31d8de0153ab5ba2d127a3d2c97700f6997162042"} Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.708830 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mhgs8" event={"ID":"70dc014d-201b-448d-84ba-2c89e7c10855","Type":"ContainerStarted","Data":"ea853d42e1d81e749d9473f07008c501b3441a71889a8416ca79b3a29b5ac4e9"} Nov 29 07:23:12 crc kubenswrapper[4828]: I1129 07:23:12.775177 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-4tb4g"] Nov 29 07:23:12 crc kubenswrapper[4828]: W1129 07:23:12.775854 4828 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebec231e_52d4_4a47_9391_c57530dc6de4.slice/crio-75d35ccf4a0301aac48fa685c3195039e7bb1608f23fe19920eb09fd5d23a8e1 WatchSource:0}: Error finding container 75d35ccf4a0301aac48fa685c3195039e7bb1608f23fe19920eb09fd5d23a8e1: Status 404 returned error can't find the container with id 75d35ccf4a0301aac48fa685c3195039e7bb1608f23fe19920eb09fd5d23a8e1 Nov 29 07:23:13 crc kubenswrapper[4828]: I1129 07:23:13.287765 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:23:13 crc kubenswrapper[4828]: I1129 07:23:13.734042 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mhgs8" event={"ID":"70dc014d-201b-448d-84ba-2c89e7c10855","Type":"ContainerStarted","Data":"e1e506485c1ea7a4452f3107adefc8e0fc18d9f429760a73eeea4e4d544c8455"} Nov 29 07:23:13 crc kubenswrapper[4828]: I1129 07:23:13.738722 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-42d82" event={"ID":"1b10deca-68bc-4694-b1b5-dd907a68af44","Type":"ContainerStarted","Data":"7badf57f351e8ebdc8d8a1fcbfbcc6605bc40a34d847b791f38a37d9316c4595"} Nov 29 07:23:13 crc kubenswrapper[4828]: I1129 07:23:13.741567 4828 generic.go:334] "Generic (PLEG): container finished" podID="297f4501-d996-4d63-8936-a65af6acf060" containerID="4ed7129a7802c28b70533c379c97e03682d4931cc8d90c0eba85420f23046a05" exitCode=0 Nov 29 07:23:13 crc kubenswrapper[4828]: I1129 07:23:13.741650 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" event={"ID":"297f4501-d996-4d63-8936-a65af6acf060","Type":"ContainerDied","Data":"4ed7129a7802c28b70533c379c97e03682d4931cc8d90c0eba85420f23046a05"} Nov 29 07:23:13 crc kubenswrapper[4828]: I1129 07:23:13.745247 4828 generic.go:334] "Generic (PLEG): container finished" podID="680ea7a6-fa11-48af-9c81-a6ef6a45ac4f" containerID="e4eb6513ab689e543bc0b5afbe5b8ac35b2f6442fa7fdb34e0017f2e0cfc6a81" 
exitCode=0 Nov 29 07:23:13 crc kubenswrapper[4828]: I1129 07:23:13.745469 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" event={"ID":"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f","Type":"ContainerDied","Data":"e4eb6513ab689e543bc0b5afbe5b8ac35b2f6442fa7fdb34e0017f2e0cfc6a81"} Nov 29 07:23:13 crc kubenswrapper[4828]: I1129 07:23:13.759805 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-4tb4g" event={"ID":"ebec231e-52d4-4a47-9391-c57530dc6de4","Type":"ContainerStarted","Data":"75d35ccf4a0301aac48fa685c3195039e7bb1608f23fe19920eb09fd5d23a8e1"} Nov 29 07:23:13 crc kubenswrapper[4828]: I1129 07:23:13.762676 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-mhgs8" podStartSLOduration=2.762647856 podStartE2EDuration="2.762647856s" podCreationTimestamp="2025-11-29 07:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:23:13.748655723 +0000 UTC m=+1333.370731781" watchObservedRunningTime="2025-11-29 07:23:13.762647856 +0000 UTC m=+1333.384723914" Nov 29 07:23:13 crc kubenswrapper[4828]: I1129 07:23:13.842198 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-42d82" podStartSLOduration=3.842175439 podStartE2EDuration="3.842175439s" podCreationTimestamp="2025-11-29 07:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:23:13.824399892 +0000 UTC m=+1333.446475950" watchObservedRunningTime="2025-11-29 07:23:13.842175439 +0000 UTC m=+1333.464251497" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.058106 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.157098 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-dns-swift-storage-0\") pod \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.157251 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-ovsdbserver-nb\") pod \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.157351 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-ovsdbserver-sb\") pod \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.157393 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp8bv\" (UniqueName: \"kubernetes.io/projected/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-kube-api-access-jp8bv\") pod \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.157414 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-dns-svc\") pod \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.157462 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-config\") pod \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\" (UID: \"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f\") " Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.166529 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-kube-api-access-jp8bv" (OuterVolumeSpecName: "kube-api-access-jp8bv") pod "680ea7a6-fa11-48af-9c81-a6ef6a45ac4f" (UID: "680ea7a6-fa11-48af-9c81-a6ef6a45ac4f"). InnerVolumeSpecName "kube-api-access-jp8bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.189939 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "680ea7a6-fa11-48af-9c81-a6ef6a45ac4f" (UID: "680ea7a6-fa11-48af-9c81-a6ef6a45ac4f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.190712 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "680ea7a6-fa11-48af-9c81-a6ef6a45ac4f" (UID: "680ea7a6-fa11-48af-9c81-a6ef6a45ac4f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.193974 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "680ea7a6-fa11-48af-9c81-a6ef6a45ac4f" (UID: "680ea7a6-fa11-48af-9c81-a6ef6a45ac4f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.204393 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-config" (OuterVolumeSpecName: "config") pod "680ea7a6-fa11-48af-9c81-a6ef6a45ac4f" (UID: "680ea7a6-fa11-48af-9c81-a6ef6a45ac4f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.207455 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "680ea7a6-fa11-48af-9c81-a6ef6a45ac4f" (UID: "680ea7a6-fa11-48af-9c81-a6ef6a45ac4f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.259021 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.259054 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.259064 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp8bv\" (UniqueName: \"kubernetes.io/projected/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-kube-api-access-jp8bv\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.259076 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-dns-svc\") on node \"crc\" DevicePath 
\"\"" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.259084 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.259092 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.779187 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.779188 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-pr5vw" event={"ID":"680ea7a6-fa11-48af-9c81-a6ef6a45ac4f","Type":"ContainerDied","Data":"c3cfb417705d6100538877e81fd97ff263b5ac6689eeda5eacd08bdefe55667b"} Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.779343 4828 scope.go:117] "RemoveContainer" containerID="e4eb6513ab689e543bc0b5afbe5b8ac35b2f6442fa7fdb34e0017f2e0cfc6a81" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.790715 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" event={"ID":"297f4501-d996-4d63-8936-a65af6acf060","Type":"ContainerStarted","Data":"1aed01d97a6d5e61d74a4d01b7ea8a3d7b40de8a01fac207324a5c17c163bbd6"} Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.791548 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.844488 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" podStartSLOduration=3.8444643640000002 podStartE2EDuration="3.844464364s" podCreationTimestamp="2025-11-29 
07:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:23:14.839099259 +0000 UTC m=+1334.461175327" watchObservedRunningTime="2025-11-29 07:23:14.844464364 +0000 UTC m=+1334.466540422" Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.913337 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-pr5vw"] Nov 29 07:23:14 crc kubenswrapper[4828]: I1129 07:23:14.924225 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-pr5vw"] Nov 29 07:23:15 crc kubenswrapper[4828]: I1129 07:23:15.424876 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="680ea7a6-fa11-48af-9c81-a6ef6a45ac4f" path="/var/lib/kubelet/pods/680ea7a6-fa11-48af-9c81-a6ef6a45ac4f/volumes" Nov 29 07:23:21 crc kubenswrapper[4828]: I1129 07:23:21.598244 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" Nov 29 07:23:21 crc kubenswrapper[4828]: I1129 07:23:21.682003 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-kc7t2"] Nov 29 07:23:21 crc kubenswrapper[4828]: I1129 07:23:21.682369 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="dnsmasq-dns" containerID="cri-o://f627145d47a9f75206bb28a9585a99339eabbc54caaefeb45cf3668145267f10" gracePeriod=10 Nov 29 07:23:22 crc kubenswrapper[4828]: I1129 07:23:22.142588 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Nov 29 07:23:27 crc kubenswrapper[4828]: I1129 07:23:27.143783 4828 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Nov 29 07:23:28 crc kubenswrapper[4828]: I1129 07:23:28.937938 4828 generic.go:334] "Generic (PLEG): container finished" podID="af9e6d63-0e81-41f4-8956-20283653b149" containerID="f627145d47a9f75206bb28a9585a99339eabbc54caaefeb45cf3668145267f10" exitCode=0 Nov 29 07:23:28 crc kubenswrapper[4828]: I1129 07:23:28.938077 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" event={"ID":"af9e6d63-0e81-41f4-8956-20283653b149","Type":"ContainerDied","Data":"f627145d47a9f75206bb28a9585a99339eabbc54caaefeb45cf3668145267f10"} Nov 29 07:23:30 crc kubenswrapper[4828]: E1129 07:23:30.606176 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Nov 29 07:23:30 crc kubenswrapper[4828]: E1129 07:23:30.607476 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jn6mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-4tb4g_openstack(ebec231e-52d4-4a47-9391-c57530dc6de4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 
29 07:23:30 crc kubenswrapper[4828]: E1129 07:23:30.609098 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-4tb4g" podUID="ebec231e-52d4-4a47-9391-c57530dc6de4" Nov 29 07:23:30 crc kubenswrapper[4828]: E1129 07:23:30.964336 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-4tb4g" podUID="ebec231e-52d4-4a47-9391-c57530dc6de4" Nov 29 07:23:37 crc kubenswrapper[4828]: I1129 07:23:37.143776 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Nov 29 07:23:37 crc kubenswrapper[4828]: I1129 07:23:37.144696 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:23:41 crc kubenswrapper[4828]: I1129 07:23:41.487738 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:23:41 crc kubenswrapper[4828]: I1129 07:23:41.488233 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:23:42 crc 
kubenswrapper[4828]: I1129 07:23:42.145301 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Nov 29 07:23:44 crc kubenswrapper[4828]: E1129 07:23:44.378652 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Nov 29 07:23:44 crc kubenswrapper[4828]: E1129 07:23:44.379523 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ku
be-api-access-42vql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-t8dd8_openstack(b6340ac2-1618-4eab-9dce-47cffd0957b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:23:44 crc kubenswrapper[4828]: E1129 07:23:44.381101 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-t8dd8" podUID="b6340ac2-1618-4eab-9dce-47cffd0957b3" Nov 29 07:23:45 crc kubenswrapper[4828]: I1129 07:23:45.166546 4828 generic.go:334] "Generic (PLEG): container finished" podID="1b10deca-68bc-4694-b1b5-dd907a68af44" containerID="7badf57f351e8ebdc8d8a1fcbfbcc6605bc40a34d847b791f38a37d9316c4595" exitCode=0 Nov 29 07:23:45 crc kubenswrapper[4828]: I1129 07:23:45.166644 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-42d82" event={"ID":"1b10deca-68bc-4694-b1b5-dd907a68af44","Type":"ContainerDied","Data":"7badf57f351e8ebdc8d8a1fcbfbcc6605bc40a34d847b791f38a37d9316c4595"} Nov 29 07:23:45 crc kubenswrapper[4828]: E1129 07:23:45.170179 4828 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-t8dd8" podUID="b6340ac2-1618-4eab-9dce-47cffd0957b3" Nov 29 07:23:47 crc kubenswrapper[4828]: I1129 07:23:47.146218 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Nov 29 07:23:52 crc kubenswrapper[4828]: I1129 07:23:52.147938 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Nov 29 07:23:57 crc kubenswrapper[4828]: I1129 07:23:57.149800 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.310536 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-42d82" event={"ID":"1b10deca-68bc-4694-b1b5-dd907a68af44","Type":"ContainerDied","Data":"6696b83fab2a1b215f8fd108965abfaf7b930e2e4a2dfe3ff28006da38c06912"} Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.310870 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6696b83fab2a1b215f8fd108965abfaf7b930e2e4a2dfe3ff28006da38c06912" Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.379852 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-42d82" Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.552665 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-scripts\") pod \"1b10deca-68bc-4694-b1b5-dd907a68af44\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.552753 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnkkm\" (UniqueName: \"kubernetes.io/projected/1b10deca-68bc-4694-b1b5-dd907a68af44-kube-api-access-pnkkm\") pod \"1b10deca-68bc-4694-b1b5-dd907a68af44\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.552797 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-combined-ca-bundle\") pod \"1b10deca-68bc-4694-b1b5-dd907a68af44\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.552863 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-credential-keys\") pod \"1b10deca-68bc-4694-b1b5-dd907a68af44\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.552938 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-fernet-keys\") pod \"1b10deca-68bc-4694-b1b5-dd907a68af44\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.552969 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-config-data\") pod \"1b10deca-68bc-4694-b1b5-dd907a68af44\" (UID: \"1b10deca-68bc-4694-b1b5-dd907a68af44\") " Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.562155 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b10deca-68bc-4694-b1b5-dd907a68af44-kube-api-access-pnkkm" (OuterVolumeSpecName: "kube-api-access-pnkkm") pod "1b10deca-68bc-4694-b1b5-dd907a68af44" (UID: "1b10deca-68bc-4694-b1b5-dd907a68af44"). InnerVolumeSpecName "kube-api-access-pnkkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.562950 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1b10deca-68bc-4694-b1b5-dd907a68af44" (UID: "1b10deca-68bc-4694-b1b5-dd907a68af44"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.563701 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1b10deca-68bc-4694-b1b5-dd907a68af44" (UID: "1b10deca-68bc-4694-b1b5-dd907a68af44"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.568504 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-scripts" (OuterVolumeSpecName: "scripts") pod "1b10deca-68bc-4694-b1b5-dd907a68af44" (UID: "1b10deca-68bc-4694-b1b5-dd907a68af44"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.586471 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b10deca-68bc-4694-b1b5-dd907a68af44" (UID: "1b10deca-68bc-4694-b1b5-dd907a68af44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.589884 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-config-data" (OuterVolumeSpecName: "config-data") pod "1b10deca-68bc-4694-b1b5-dd907a68af44" (UID: "1b10deca-68bc-4694-b1b5-dd907a68af44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.655711 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.655747 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnkkm\" (UniqueName: \"kubernetes.io/projected/1b10deca-68bc-4694-b1b5-dd907a68af44-kube-api-access-pnkkm\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.655759 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.655768 4828 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-credential-keys\") on node \"crc\" DevicePath 
\"\"" Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.655777 4828 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:01 crc kubenswrapper[4828]: I1129 07:24:01.655786 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b10deca-68bc-4694-b1b5-dd907a68af44-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.150889 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.322056 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-42d82" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.472159 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-42d82"] Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.479507 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-42d82"] Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.600297 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nzmt7"] Nov 29 07:24:02 crc kubenswrapper[4828]: E1129 07:24:02.600857 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b10deca-68bc-4694-b1b5-dd907a68af44" containerName="keystone-bootstrap" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.600891 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b10deca-68bc-4694-b1b5-dd907a68af44" containerName="keystone-bootstrap" Nov 29 07:24:02 crc kubenswrapper[4828]: E1129 07:24:02.600929 4828 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="680ea7a6-fa11-48af-9c81-a6ef6a45ac4f" containerName="init" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.600935 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="680ea7a6-fa11-48af-9c81-a6ef6a45ac4f" containerName="init" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.601188 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b10deca-68bc-4694-b1b5-dd907a68af44" containerName="keystone-bootstrap" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.601222 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="680ea7a6-fa11-48af-9c81-a6ef6a45ac4f" containerName="init" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.601982 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.605651 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.605703 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.605746 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5wkrh" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.605783 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.605753 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.610776 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nzmt7"] Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.778806 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-fernet-keys\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.778868 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftbj6\" (UniqueName: \"kubernetes.io/projected/786488d0-cd0e-4b05-b8da-dc01f712028c-kube-api-access-ftbj6\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.778967 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-combined-ca-bundle\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.779012 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-credential-keys\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.779047 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-scripts\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.779065 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-config-data\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: E1129 07:24:02.814717 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 29 07:24:02 crc kubenswrapper[4828]: E1129 07:24:02.815602 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pph8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-dwxw5_openstack(3d3d2548-679c-4c58-8709-a28f3178c1d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:24:02 crc kubenswrapper[4828]: E1129 07:24:02.817051 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-dwxw5" podUID="3d3d2548-679c-4c58-8709-a28f3178c1d5" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.880551 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-combined-ca-bundle\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.880611 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-credential-keys\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.880671 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-scripts\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.880693 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-config-data\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.880775 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-fernet-keys\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.880804 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftbj6\" (UniqueName: \"kubernetes.io/projected/786488d0-cd0e-4b05-b8da-dc01f712028c-kube-api-access-ftbj6\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.892582 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-fernet-keys\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.900837 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-scripts\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.901054 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-combined-ca-bundle\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.901942 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftbj6\" (UniqueName: \"kubernetes.io/projected/786488d0-cd0e-4b05-b8da-dc01f712028c-kube-api-access-ftbj6\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.902362 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-credential-keys\") pod \"keystone-bootstrap-nzmt7\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.914731 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-config-data\") pod \"keystone-bootstrap-nzmt7\" (UID: 
\"786488d0-cd0e-4b05-b8da-dc01f712028c\") " pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:02 crc kubenswrapper[4828]: I1129 07:24:02.976436 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.334031 4828 generic.go:334] "Generic (PLEG): container finished" podID="5e2b60cb-6670-4720-8aaf-3db7307905b0" containerID="cb22272f1c7ebd3421c6cee06ec017b778b971a2311ec3aff754e2f293dd8ee9" exitCode=0 Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.334845 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wc5ng" event={"ID":"5e2b60cb-6670-4720-8aaf-3db7307905b0","Type":"ContainerDied","Data":"cb22272f1c7ebd3421c6cee06ec017b778b971a2311ec3aff754e2f293dd8ee9"} Nov 29 07:24:03 crc kubenswrapper[4828]: E1129 07:24:03.336231 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-dwxw5" podUID="3d3d2548-679c-4c58-8709-a28f3178c1d5" Nov 29 07:24:03 crc kubenswrapper[4828]: E1129 07:24:03.383412 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:9fd33563f895044a695c9352d34cb144bf53704b61b6cb94fe219ebbb891db92: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ceilometer-central/blobs/sha256:9fd33563f895044a695c9352d34cb144bf53704b61b6cb94fe219ebbb891db92\": context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 29 07:24:03 crc kubenswrapper[4828]: E1129 07:24:03.383711 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb8hb9h667h5dfh5b9h5cch57h5c8h6h59bh76h67bh5fh5dch5b5h57h5c9h659h5d4h554h67bh9dh54ch545hbdh9fh5d4h87hfhddh659h8cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndfs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(adf23e65-d886-48b7-b5b8-8f23a81cdc81): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:9fd33563f895044a695c9352d34cb144bf53704b61b6cb94fe219ebbb891db92: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ceilometer-central/blobs/sha256:9fd33563f895044a695c9352d34cb144bf53704b61b6cb94fe219ebbb891db92\": context canceled" logger="UnhandledError" Nov 29 07:24:03 crc kubenswrapper[4828]: E1129 07:24:03.402188 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 29 07:24:03 crc kubenswrapper[4828]: E1129 07:24:03.402564 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d25wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-vphwh_openstack(b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:24:03 crc kubenswrapper[4828]: E1129 07:24:03.404310 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-vphwh" 
podUID="b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.424808 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b10deca-68bc-4694-b1b5-dd907a68af44" path="/var/lib/kubelet/pods/1b10deca-68bc-4694-b1b5-dd907a68af44/volumes" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.475969 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.591756 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-config\") pod \"af9e6d63-0e81-41f4-8956-20283653b149\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.591857 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhp75\" (UniqueName: \"kubernetes.io/projected/af9e6d63-0e81-41f4-8956-20283653b149-kube-api-access-qhp75\") pod \"af9e6d63-0e81-41f4-8956-20283653b149\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.591910 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-ovsdbserver-sb\") pod \"af9e6d63-0e81-41f4-8956-20283653b149\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.591943 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-ovsdbserver-nb\") pod \"af9e6d63-0e81-41f4-8956-20283653b149\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.592656 4828 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-dns-swift-storage-0\") pod \"af9e6d63-0e81-41f4-8956-20283653b149\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.592703 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-dns-svc\") pod \"af9e6d63-0e81-41f4-8956-20283653b149\" (UID: \"af9e6d63-0e81-41f4-8956-20283653b149\") " Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.596540 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af9e6d63-0e81-41f4-8956-20283653b149-kube-api-access-qhp75" (OuterVolumeSpecName: "kube-api-access-qhp75") pod "af9e6d63-0e81-41f4-8956-20283653b149" (UID: "af9e6d63-0e81-41f4-8956-20283653b149"). InnerVolumeSpecName "kube-api-access-qhp75". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.640669 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "af9e6d63-0e81-41f4-8956-20283653b149" (UID: "af9e6d63-0e81-41f4-8956-20283653b149"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.640747 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "af9e6d63-0e81-41f4-8956-20283653b149" (UID: "af9e6d63-0e81-41f4-8956-20283653b149"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.640746 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af9e6d63-0e81-41f4-8956-20283653b149" (UID: "af9e6d63-0e81-41f4-8956-20283653b149"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.643969 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-config" (OuterVolumeSpecName: "config") pod "af9e6d63-0e81-41f4-8956-20283653b149" (UID: "af9e6d63-0e81-41f4-8956-20283653b149"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.643999 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "af9e6d63-0e81-41f4-8956-20283653b149" (UID: "af9e6d63-0e81-41f4-8956-20283653b149"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.695433 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhp75\" (UniqueName: \"kubernetes.io/projected/af9e6d63-0e81-41f4-8956-20283653b149-kube-api-access-qhp75\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.695466 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.695475 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.695485 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.695496 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:03 crc kubenswrapper[4828]: I1129 07:24:03.695504 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af9e6d63-0e81-41f4-8956-20283653b149-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:04 crc kubenswrapper[4828]: I1129 07:24:04.287859 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nzmt7"] Nov 29 07:24:04 crc kubenswrapper[4828]: W1129 07:24:04.296595 4828 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod786488d0_cd0e_4b05_b8da_dc01f712028c.slice/crio-d43221b8c0f8a0dd34c2d5b1fb0c23b1c78ddf9c9f8a2317572b9ad843c9e47c WatchSource:0}: Error finding container d43221b8c0f8a0dd34c2d5b1fb0c23b1c78ddf9c9f8a2317572b9ad843c9e47c: Status 404 returned error can't find the container with id d43221b8c0f8a0dd34c2d5b1fb0c23b1c78ddf9c9f8a2317572b9ad843c9e47c Nov 29 07:24:04 crc kubenswrapper[4828]: I1129 07:24:04.356368 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8dd8" event={"ID":"b6340ac2-1618-4eab-9dce-47cffd0957b3","Type":"ContainerStarted","Data":"9f7edfd69e625429b1becd952f48c4aee55552a65674746822db26bfa77810c6"} Nov 29 07:24:04 crc kubenswrapper[4828]: I1129 07:24:04.367778 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-4tb4g" event={"ID":"ebec231e-52d4-4a47-9391-c57530dc6de4","Type":"ContainerStarted","Data":"277fcaa2500b14c70f6b46ca7c02783a5a575b2a979c1f55f3d3cc531fa3b0a6"} Nov 29 07:24:04 crc kubenswrapper[4828]: I1129 07:24:04.372471 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nzmt7" event={"ID":"786488d0-cd0e-4b05-b8da-dc01f712028c","Type":"ContainerStarted","Data":"d43221b8c0f8a0dd34c2d5b1fb0c23b1c78ddf9c9f8a2317572b9ad843c9e47c"} Nov 29 07:24:04 crc kubenswrapper[4828]: I1129 07:24:04.376847 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" Nov 29 07:24:04 crc kubenswrapper[4828]: I1129 07:24:04.378618 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" event={"ID":"af9e6d63-0e81-41f4-8956-20283653b149","Type":"ContainerDied","Data":"f40ab9509bb49dc4a0e01d9ca653bad62b9a788cbfbdac77c9aa5fe50b1cfc5e"} Nov 29 07:24:04 crc kubenswrapper[4828]: I1129 07:24:04.378690 4828 scope.go:117] "RemoveContainer" containerID="f627145d47a9f75206bb28a9585a99339eabbc54caaefeb45cf3668145267f10" Nov 29 07:24:04 crc kubenswrapper[4828]: E1129 07:24:04.381014 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-vphwh" podUID="b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c" Nov 29 07:24:04 crc kubenswrapper[4828]: I1129 07:24:04.382236 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-t8dd8" podStartSLOduration=2.2255285320000002 podStartE2EDuration="53.382198155s" podCreationTimestamp="2025-11-29 07:23:11 +0000 UTC" firstStartedPulling="2025-11-29 07:23:12.681459613 +0000 UTC m=+1332.303535671" lastFinishedPulling="2025-11-29 07:24:03.838129226 +0000 UTC m=+1383.460205294" observedRunningTime="2025-11-29 07:24:04.379008174 +0000 UTC m=+1384.001084232" watchObservedRunningTime="2025-11-29 07:24:04.382198155 +0000 UTC m=+1384.004274213" Nov 29 07:24:04 crc kubenswrapper[4828]: I1129 07:24:04.406557 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-4tb4g" podStartSLOduration=2.347022074 podStartE2EDuration="53.406537608s" podCreationTimestamp="2025-11-29 07:23:11 +0000 UTC" firstStartedPulling="2025-11-29 07:23:12.78411557 +0000 UTC m=+1332.406191628" lastFinishedPulling="2025-11-29 
07:24:03.843631104 +0000 UTC m=+1383.465707162" observedRunningTime="2025-11-29 07:24:04.396718631 +0000 UTC m=+1384.018794689" watchObservedRunningTime="2025-11-29 07:24:04.406537608 +0000 UTC m=+1384.028613666" Nov 29 07:24:04 crc kubenswrapper[4828]: I1129 07:24:04.445195 4828 scope.go:117] "RemoveContainer" containerID="a5e033a89544c2b1c16c34d9653149e5a1c7d027f9f1db187f8e45acd2f8cdb5" Nov 29 07:24:04 crc kubenswrapper[4828]: I1129 07:24:04.478865 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-kc7t2"] Nov 29 07:24:04 crc kubenswrapper[4828]: I1129 07:24:04.491149 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-kc7t2"] Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.080751 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wc5ng" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.230978 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-config-data\") pod \"5e2b60cb-6670-4720-8aaf-3db7307905b0\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.231462 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgdpb\" (UniqueName: \"kubernetes.io/projected/5e2b60cb-6670-4720-8aaf-3db7307905b0-kube-api-access-jgdpb\") pod \"5e2b60cb-6670-4720-8aaf-3db7307905b0\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.231616 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-combined-ca-bundle\") pod \"5e2b60cb-6670-4720-8aaf-3db7307905b0\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " Nov 29 07:24:05 crc 
kubenswrapper[4828]: I1129 07:24:05.231649 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-db-sync-config-data\") pod \"5e2b60cb-6670-4720-8aaf-3db7307905b0\" (UID: \"5e2b60cb-6670-4720-8aaf-3db7307905b0\") " Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.240434 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5e2b60cb-6670-4720-8aaf-3db7307905b0" (UID: "5e2b60cb-6670-4720-8aaf-3db7307905b0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.240610 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e2b60cb-6670-4720-8aaf-3db7307905b0-kube-api-access-jgdpb" (OuterVolumeSpecName: "kube-api-access-jgdpb") pod "5e2b60cb-6670-4720-8aaf-3db7307905b0" (UID: "5e2b60cb-6670-4720-8aaf-3db7307905b0"). InnerVolumeSpecName "kube-api-access-jgdpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.275168 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e2b60cb-6670-4720-8aaf-3db7307905b0" (UID: "5e2b60cb-6670-4720-8aaf-3db7307905b0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.282494 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-config-data" (OuterVolumeSpecName: "config-data") pod "5e2b60cb-6670-4720-8aaf-3db7307905b0" (UID: "5e2b60cb-6670-4720-8aaf-3db7307905b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.333321 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.333354 4828 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.333365 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2b60cb-6670-4720-8aaf-3db7307905b0-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.333374 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgdpb\" (UniqueName: \"kubernetes.io/projected/5e2b60cb-6670-4720-8aaf-3db7307905b0-kube-api-access-jgdpb\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.394369 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nzmt7" event={"ID":"786488d0-cd0e-4b05-b8da-dc01f712028c","Type":"ContainerStarted","Data":"da276903bc9bdbb57fb309029afa8bb4ee29f2ec9d725aab9bbe149fbb87f59d"} Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.396406 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-sync-wc5ng" event={"ID":"5e2b60cb-6670-4720-8aaf-3db7307905b0","Type":"ContainerDied","Data":"bd900b2ecebd9b8c3ab5b26529f77965c75af000eac0e956084f89bcf82fe67c"} Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.396482 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd900b2ecebd9b8c3ab5b26529f77965c75af000eac0e956084f89bcf82fe67c" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.396497 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wc5ng" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.427374 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nzmt7" podStartSLOduration=3.427351029 podStartE2EDuration="3.427351029s" podCreationTimestamp="2025-11-29 07:24:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:05.411834798 +0000 UTC m=+1385.033910856" watchObservedRunningTime="2025-11-29 07:24:05.427351029 +0000 UTC m=+1385.049427087" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.435390 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af9e6d63-0e81-41f4-8956-20283653b149" path="/var/lib/kubelet/pods/af9e6d63-0e81-41f4-8956-20283653b149/volumes" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.796679 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cwn7b"] Nov 29 07:24:05 crc kubenswrapper[4828]: E1129 07:24:05.797178 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2b60cb-6670-4720-8aaf-3db7307905b0" containerName="glance-db-sync" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.797195 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2b60cb-6670-4720-8aaf-3db7307905b0" containerName="glance-db-sync" Nov 29 07:24:05 crc kubenswrapper[4828]: 
E1129 07:24:05.797225 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="init" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.797234 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="init" Nov 29 07:24:05 crc kubenswrapper[4828]: E1129 07:24:05.797286 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="dnsmasq-dns" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.797297 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="dnsmasq-dns" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.797491 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e2b60cb-6670-4720-8aaf-3db7307905b0" containerName="glance-db-sync" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.797512 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="dnsmasq-dns" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.798755 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.821107 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cwn7b"] Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.942224 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgzw8\" (UniqueName: \"kubernetes.io/projected/dcb66a69-5eb2-4468-b7b9-beb16a814a76-kube-api-access-tgzw8\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.942313 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.942387 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.942411 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.942454 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-config\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:05 crc kubenswrapper[4828]: I1129 07:24:05.942487 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.046324 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgzw8\" (UniqueName: \"kubernetes.io/projected/dcb66a69-5eb2-4468-b7b9-beb16a814a76-kube-api-access-tgzw8\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.046385 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.046450 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.046472 4828 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.046503 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-config\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.046527 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.047515 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.047910 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.048121 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-ovsdbserver-sb\") 
pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.048209 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.048714 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-config\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.089486 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgzw8\" (UniqueName: \"kubernetes.io/projected/dcb66a69-5eb2-4468-b7b9-beb16a814a76-kube-api-access-tgzw8\") pod \"dnsmasq-dns-8b5c85b87-cwn7b\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.130159 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.692675 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.694551 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.701361 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.701803 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.701867 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ghtfr" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.711348 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.877714 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px5b4\" (UniqueName: \"kubernetes.io/projected/d0f40f2a-e1df-4854-89af-848d2f1a7c86-kube-api-access-px5b4\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.877767 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.877831 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0f40f2a-e1df-4854-89af-848d2f1a7c86-logs\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 
07:24:06.877867 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.877899 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.877932 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.877975 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0f40f2a-e1df-4854-89af-848d2f1a7c86-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.959499 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.961218 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.963640 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.974492 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.979403 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px5b4\" (UniqueName: \"kubernetes.io/projected/d0f40f2a-e1df-4854-89af-848d2f1a7c86-kube-api-access-px5b4\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.979483 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.979572 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0f40f2a-e1df-4854-89af-848d2f1a7c86-logs\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.979617 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.979639 
4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.979670 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.979711 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0f40f2a-e1df-4854-89af-848d2f1a7c86-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.980257 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0f40f2a-e1df-4854-89af-848d2f1a7c86-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.980924 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.988001 4828 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0f40f2a-e1df-4854-89af-848d2f1a7c86-logs\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:06 crc kubenswrapper[4828]: I1129 07:24:06.988523 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.000852 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.002386 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.012165 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px5b4\" (UniqueName: \"kubernetes.io/projected/d0f40f2a-e1df-4854-89af-848d2f1a7c86-kube-api-access-px5b4\") pod \"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.020598 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"glance-default-external-api-0\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.029245 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.081667 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa41676e-b478-4599-b322-54e49002614f-logs\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.081744 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.081805 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68jjr\" (UniqueName: \"kubernetes.io/projected/aa41676e-b478-4599-b322-54e49002614f-kube-api-access-68jjr\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.081868 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.081903 
4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.081930 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.081969 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa41676e-b478-4599-b322-54e49002614f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.151467 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-kc7t2" podUID="af9e6d63-0e81-41f4-8956-20283653b149" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.183857 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa41676e-b478-4599-b322-54e49002614f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.183982 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/aa41676e-b478-4599-b322-54e49002614f-logs\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.184019 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.184085 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68jjr\" (UniqueName: \"kubernetes.io/projected/aa41676e-b478-4599-b322-54e49002614f-kube-api-access-68jjr\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.184146 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.184169 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.184194 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.184511 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.185022 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa41676e-b478-4599-b322-54e49002614f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.185640 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa41676e-b478-4599-b322-54e49002614f-logs\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.190957 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.197180 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " 
pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.210739 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.221034 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68jjr\" (UniqueName: \"kubernetes.io/projected/aa41676e-b478-4599-b322-54e49002614f-kube-api-access-68jjr\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.268293 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:07 crc kubenswrapper[4828]: I1129 07:24:07.288082 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:08 crc kubenswrapper[4828]: I1129 07:24:08.657962 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:24:08 crc kubenswrapper[4828]: I1129 07:24:08.729567 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:24:10 crc kubenswrapper[4828]: W1129 07:24:10.282527 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0f40f2a_e1df_4854_89af_848d2f1a7c86.slice/crio-9791105ff99d49a2dfa89b1ffb8c3db0a2233c30113e9546242ec62db46d4880 WatchSource:0}: Error finding container 9791105ff99d49a2dfa89b1ffb8c3db0a2233c30113e9546242ec62db46d4880: Status 404 returned error can't find the container with id 9791105ff99d49a2dfa89b1ffb8c3db0a2233c30113e9546242ec62db46d4880 Nov 29 07:24:10 crc kubenswrapper[4828]: I1129 07:24:10.294058 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:24:10 crc kubenswrapper[4828]: W1129 07:24:10.401733 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcb66a69_5eb2_4468_b7b9_beb16a814a76.slice/crio-142bfaa1c79c48774d04a0a6eaee4e6675c5f536650322935549a069b2d093ec WatchSource:0}: Error finding container 142bfaa1c79c48774d04a0a6eaee4e6675c5f536650322935549a069b2d093ec: Status 404 returned error can't find the container with id 142bfaa1c79c48774d04a0a6eaee4e6675c5f536650322935549a069b2d093ec Nov 29 07:24:10 crc kubenswrapper[4828]: I1129 07:24:10.405785 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cwn7b"] Nov 29 07:24:10 crc kubenswrapper[4828]: I1129 07:24:10.426941 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:24:10 crc 
kubenswrapper[4828]: W1129 07:24:10.435827 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa41676e_b478_4599_b322_54e49002614f.slice/crio-a58ac3e29ad409f434fc18dfb3da7c4e5a139499fafc88ac3c52e8f7155259ef WatchSource:0}: Error finding container a58ac3e29ad409f434fc18dfb3da7c4e5a139499fafc88ac3c52e8f7155259ef: Status 404 returned error can't find the container with id a58ac3e29ad409f434fc18dfb3da7c4e5a139499fafc88ac3c52e8f7155259ef Nov 29 07:24:10 crc kubenswrapper[4828]: I1129 07:24:10.453596 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" event={"ID":"dcb66a69-5eb2-4468-b7b9-beb16a814a76","Type":"ContainerStarted","Data":"142bfaa1c79c48774d04a0a6eaee4e6675c5f536650322935549a069b2d093ec"} Nov 29 07:24:10 crc kubenswrapper[4828]: I1129 07:24:10.468443 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adf23e65-d886-48b7-b5b8-8f23a81cdc81","Type":"ContainerStarted","Data":"fe522dbd0ea27352e0f8636a247d322a8779a414aafdd3620a462b6ada4215f8"} Nov 29 07:24:10 crc kubenswrapper[4828]: I1129 07:24:10.475062 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0f40f2a-e1df-4854-89af-848d2f1a7c86","Type":"ContainerStarted","Data":"9791105ff99d49a2dfa89b1ffb8c3db0a2233c30113e9546242ec62db46d4880"} Nov 29 07:24:10 crc kubenswrapper[4828]: I1129 07:24:10.478907 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa41676e-b478-4599-b322-54e49002614f","Type":"ContainerStarted","Data":"a58ac3e29ad409f434fc18dfb3da7c4e5a139499fafc88ac3c52e8f7155259ef"} Nov 29 07:24:11 crc kubenswrapper[4828]: I1129 07:24:11.486931 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:24:11 crc kubenswrapper[4828]: I1129 07:24:11.487312 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:24:11 crc kubenswrapper[4828]: I1129 07:24:11.490110 4828 generic.go:334] "Generic (PLEG): container finished" podID="b6340ac2-1618-4eab-9dce-47cffd0957b3" containerID="9f7edfd69e625429b1becd952f48c4aee55552a65674746822db26bfa77810c6" exitCode=0 Nov 29 07:24:11 crc kubenswrapper[4828]: I1129 07:24:11.490185 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8dd8" event={"ID":"b6340ac2-1618-4eab-9dce-47cffd0957b3","Type":"ContainerDied","Data":"9f7edfd69e625429b1becd952f48c4aee55552a65674746822db26bfa77810c6"} Nov 29 07:24:11 crc kubenswrapper[4828]: I1129 07:24:11.507838 4828 generic.go:334] "Generic (PLEG): container finished" podID="786488d0-cd0e-4b05-b8da-dc01f712028c" containerID="da276903bc9bdbb57fb309029afa8bb4ee29f2ec9d725aab9bbe149fbb87f59d" exitCode=0 Nov 29 07:24:11 crc kubenswrapper[4828]: I1129 07:24:11.507939 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nzmt7" event={"ID":"786488d0-cd0e-4b05-b8da-dc01f712028c","Type":"ContainerDied","Data":"da276903bc9bdbb57fb309029afa8bb4ee29f2ec9d725aab9bbe149fbb87f59d"} Nov 29 07:24:11 crc kubenswrapper[4828]: I1129 07:24:11.523284 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0f40f2a-e1df-4854-89af-848d2f1a7c86","Type":"ContainerStarted","Data":"efd451b1af34c1fcac7649f4d90ad7ffcd56b5edff7a4d7e4af5ef5da75a4fb9"} Nov 29 07:24:11 crc 
kubenswrapper[4828]: I1129 07:24:11.538138 4828 generic.go:334] "Generic (PLEG): container finished" podID="dcb66a69-5eb2-4468-b7b9-beb16a814a76" containerID="4ed0e00170fde8fe7004be8b28332476b23b37697c0991c5e2bcf071281ba217" exitCode=0 Nov 29 07:24:11 crc kubenswrapper[4828]: I1129 07:24:11.538215 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" event={"ID":"dcb66a69-5eb2-4468-b7b9-beb16a814a76","Type":"ContainerDied","Data":"4ed0e00170fde8fe7004be8b28332476b23b37697c0991c5e2bcf071281ba217"} Nov 29 07:24:12 crc kubenswrapper[4828]: I1129 07:24:12.553857 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa41676e-b478-4599-b322-54e49002614f","Type":"ContainerStarted","Data":"f2e7828ce8ce159b8ecaed85998af9be750766ded39637e150614ebfe79d3ff2"} Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.057801 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t8dd8" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.066045 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.110911 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-scripts\") pod \"b6340ac2-1618-4eab-9dce-47cffd0957b3\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.111020 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-scripts\") pod \"786488d0-cd0e-4b05-b8da-dc01f712028c\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.111054 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-combined-ca-bundle\") pod \"b6340ac2-1618-4eab-9dce-47cffd0957b3\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.111161 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-credential-keys\") pod \"786488d0-cd0e-4b05-b8da-dc01f712028c\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.112061 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-config-data\") pod \"b6340ac2-1618-4eab-9dce-47cffd0957b3\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.112099 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42vql\" (UniqueName: 
\"kubernetes.io/projected/b6340ac2-1618-4eab-9dce-47cffd0957b3-kube-api-access-42vql\") pod \"b6340ac2-1618-4eab-9dce-47cffd0957b3\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.112117 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftbj6\" (UniqueName: \"kubernetes.io/projected/786488d0-cd0e-4b05-b8da-dc01f712028c-kube-api-access-ftbj6\") pod \"786488d0-cd0e-4b05-b8da-dc01f712028c\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.112154 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6340ac2-1618-4eab-9dce-47cffd0957b3-logs\") pod \"b6340ac2-1618-4eab-9dce-47cffd0957b3\" (UID: \"b6340ac2-1618-4eab-9dce-47cffd0957b3\") " Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.112212 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-combined-ca-bundle\") pod \"786488d0-cd0e-4b05-b8da-dc01f712028c\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.112310 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-fernet-keys\") pod \"786488d0-cd0e-4b05-b8da-dc01f712028c\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.112343 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-config-data\") pod \"786488d0-cd0e-4b05-b8da-dc01f712028c\" (UID: \"786488d0-cd0e-4b05-b8da-dc01f712028c\") " Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.112600 
4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6340ac2-1618-4eab-9dce-47cffd0957b3-logs" (OuterVolumeSpecName: "logs") pod "b6340ac2-1618-4eab-9dce-47cffd0957b3" (UID: "b6340ac2-1618-4eab-9dce-47cffd0957b3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.113544 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6340ac2-1618-4eab-9dce-47cffd0957b3-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.119476 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-scripts" (OuterVolumeSpecName: "scripts") pod "b6340ac2-1618-4eab-9dce-47cffd0957b3" (UID: "b6340ac2-1618-4eab-9dce-47cffd0957b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.120500 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "786488d0-cd0e-4b05-b8da-dc01f712028c" (UID: "786488d0-cd0e-4b05-b8da-dc01f712028c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.122564 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-scripts" (OuterVolumeSpecName: "scripts") pod "786488d0-cd0e-4b05-b8da-dc01f712028c" (UID: "786488d0-cd0e-4b05-b8da-dc01f712028c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.123564 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/786488d0-cd0e-4b05-b8da-dc01f712028c-kube-api-access-ftbj6" (OuterVolumeSpecName: "kube-api-access-ftbj6") pod "786488d0-cd0e-4b05-b8da-dc01f712028c" (UID: "786488d0-cd0e-4b05-b8da-dc01f712028c"). InnerVolumeSpecName "kube-api-access-ftbj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.129560 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6340ac2-1618-4eab-9dce-47cffd0957b3-kube-api-access-42vql" (OuterVolumeSpecName: "kube-api-access-42vql") pod "b6340ac2-1618-4eab-9dce-47cffd0957b3" (UID: "b6340ac2-1618-4eab-9dce-47cffd0957b3"). InnerVolumeSpecName "kube-api-access-42vql". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.142200 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "786488d0-cd0e-4b05-b8da-dc01f712028c" (UID: "786488d0-cd0e-4b05-b8da-dc01f712028c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.152896 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6340ac2-1618-4eab-9dce-47cffd0957b3" (UID: "b6340ac2-1618-4eab-9dce-47cffd0957b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.156602 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "786488d0-cd0e-4b05-b8da-dc01f712028c" (UID: "786488d0-cd0e-4b05-b8da-dc01f712028c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.157383 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-config-data" (OuterVolumeSpecName: "config-data") pod "786488d0-cd0e-4b05-b8da-dc01f712028c" (UID: "786488d0-cd0e-4b05-b8da-dc01f712028c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.159011 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-config-data" (OuterVolumeSpecName: "config-data") pod "b6340ac2-1618-4eab-9dce-47cffd0957b3" (UID: "b6340ac2-1618-4eab-9dce-47cffd0957b3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.214888 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.214940 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.214953 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.214986 4828 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.214999 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6340ac2-1618-4eab-9dce-47cffd0957b3-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.215012 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42vql\" (UniqueName: \"kubernetes.io/projected/b6340ac2-1618-4eab-9dce-47cffd0957b3-kube-api-access-42vql\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.215025 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftbj6\" (UniqueName: \"kubernetes.io/projected/786488d0-cd0e-4b05-b8da-dc01f712028c-kube-api-access-ftbj6\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.215036 4828 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.215046 4828 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.215056 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/786488d0-cd0e-4b05-b8da-dc01f712028c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.563969 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nzmt7" event={"ID":"786488d0-cd0e-4b05-b8da-dc01f712028c","Type":"ContainerDied","Data":"d43221b8c0f8a0dd34c2d5b1fb0c23b1c78ddf9c9f8a2317572b9ad843c9e47c"} Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.564009 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43221b8c0f8a0dd34c2d5b1fb0c23b1c78ddf9c9f8a2317572b9ad843c9e47c" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.564064 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nzmt7" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.571218 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0f40f2a-e1df-4854-89af-848d2f1a7c86","Type":"ContainerStarted","Data":"19a5d5abc81766af504b997763028757b19326a3568cb2c432973c242c21ab1c"} Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.571315 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d0f40f2a-e1df-4854-89af-848d2f1a7c86" containerName="glance-log" containerID="cri-o://efd451b1af34c1fcac7649f4d90ad7ffcd56b5edff7a4d7e4af5ef5da75a4fb9" gracePeriod=30 Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.571326 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d0f40f2a-e1df-4854-89af-848d2f1a7c86" containerName="glance-httpd" containerID="cri-o://19a5d5abc81766af504b997763028757b19326a3568cb2c432973c242c21ab1c" gracePeriod=30 Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.574207 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa41676e-b478-4599-b322-54e49002614f","Type":"ContainerStarted","Data":"4b905e242f430339cedbbf326b020991b7abcf4d65145905344dc37097855cfe"} Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.574352 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aa41676e-b478-4599-b322-54e49002614f" containerName="glance-log" containerID="cri-o://f2e7828ce8ce159b8ecaed85998af9be750766ded39637e150614ebfe79d3ff2" gracePeriod=30 Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.574437 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="aa41676e-b478-4599-b322-54e49002614f" containerName="glance-httpd" containerID="cri-o://4b905e242f430339cedbbf326b020991b7abcf4d65145905344dc37097855cfe" gracePeriod=30 Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.587068 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t8dd8" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.588399 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8dd8" event={"ID":"b6340ac2-1618-4eab-9dce-47cffd0957b3","Type":"ContainerDied","Data":"98f02e572bbf49cf7871a1e49fc6b8a61693464f4fb0d9d3b3079618d5be0d44"} Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.588459 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98f02e572bbf49cf7871a1e49fc6b8a61693464f4fb0d9d3b3079618d5be0d44" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.598563 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" event={"ID":"dcb66a69-5eb2-4468-b7b9-beb16a814a76","Type":"ContainerStarted","Data":"d0285cb48b7fee68afd4b7e46dd2e1c37c6a857c96473acc3179b48542a1e1b4"} Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.598858 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.611960 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5f9777c7b8-ctgxk"] Nov 29 07:24:13 crc kubenswrapper[4828]: E1129 07:24:13.612475 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786488d0-cd0e-4b05-b8da-dc01f712028c" containerName="keystone-bootstrap" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.612496 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="786488d0-cd0e-4b05-b8da-dc01f712028c" containerName="keystone-bootstrap" Nov 29 07:24:13 crc kubenswrapper[4828]: E1129 
07:24:13.612529 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6340ac2-1618-4eab-9dce-47cffd0957b3" containerName="placement-db-sync" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.612537 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6340ac2-1618-4eab-9dce-47cffd0957b3" containerName="placement-db-sync" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.612783 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6340ac2-1618-4eab-9dce-47cffd0957b3" containerName="placement-db-sync" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.612806 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="786488d0-cd0e-4b05-b8da-dc01f712028c" containerName="keystone-bootstrap" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.614189 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.616047 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.616023584 podStartE2EDuration="8.616023584s" podCreationTimestamp="2025-11-29 07:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:13.605035068 +0000 UTC m=+1393.227111136" watchObservedRunningTime="2025-11-29 07:24:13.616023584 +0000 UTC m=+1393.238099642" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.619600 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-config-data\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.619820 4828 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32bb4de2-38a8-4361-9f97-d2932fc3bba6-logs\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.620017 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-scripts\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.620105 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rjjz\" (UniqueName: \"kubernetes.io/projected/32bb4de2-38a8-4361-9f97-d2932fc3bba6-kube-api-access-7rjjz\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.620221 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-internal-tls-certs\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.620339 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-combined-ca-bundle\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 
07:24:13.620541 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-public-tls-certs\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.623089 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.623962 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.624419 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.626686 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.636521 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-l7tgb" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.660843 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.660821323 podStartE2EDuration="8.660821323s" podCreationTimestamp="2025-11-29 07:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:13.634959372 +0000 UTC m=+1393.257035430" watchObservedRunningTime="2025-11-29 07:24:13.660821323 +0000 UTC m=+1393.282897381" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.661371 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f9777c7b8-ctgxk"] Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 
07:24:13.669230 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" podStartSLOduration=8.669214645 podStartE2EDuration="8.669214645s" podCreationTimestamp="2025-11-29 07:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:13.663543002 +0000 UTC m=+1393.285619060" watchObservedRunningTime="2025-11-29 07:24:13.669214645 +0000 UTC m=+1393.291290703" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.723338 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-scripts\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.723416 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rjjz\" (UniqueName: \"kubernetes.io/projected/32bb4de2-38a8-4361-9f97-d2932fc3bba6-kube-api-access-7rjjz\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.723450 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-combined-ca-bundle\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.723476 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-internal-tls-certs\") pod \"placement-5f9777c7b8-ctgxk\" (UID: 
\"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.723509 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-public-tls-certs\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.730172 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-scripts\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.730574 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-combined-ca-bundle\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.731291 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-public-tls-certs\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.732573 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-config-data\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc 
kubenswrapper[4828]: I1129 07:24:13.733110 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-internal-tls-certs\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.733181 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32bb4de2-38a8-4361-9f97-d2932fc3bba6-logs\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.734555 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32bb4de2-38a8-4361-9f97-d2932fc3bba6-logs\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.748331 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-757484cf46-h2rvl"] Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.751136 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32bb4de2-38a8-4361-9f97-d2932fc3bba6-config-data\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.751942 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.754718 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rjjz\" (UniqueName: \"kubernetes.io/projected/32bb4de2-38a8-4361-9f97-d2932fc3bba6-kube-api-access-7rjjz\") pod \"placement-5f9777c7b8-ctgxk\" (UID: \"32bb4de2-38a8-4361-9f97-d2932fc3bba6\") " pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.755408 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.755628 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.755796 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.755946 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.756102 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5wkrh" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.756388 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.760135 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-757484cf46-h2rvl"] Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.933091 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.936985 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-fernet-keys\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.937053 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-internal-tls-certs\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.937115 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-combined-ca-bundle\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.937139 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq766\" (UniqueName: \"kubernetes.io/projected/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-kube-api-access-jq766\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.937743 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-credential-keys\") pod \"keystone-757484cf46-h2rvl\" 
(UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.937884 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-config-data\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.937920 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-public-tls-certs\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:13 crc kubenswrapper[4828]: I1129 07:24:13.938210 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-scripts\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.039740 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-internal-tls-certs\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.039819 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-combined-ca-bundle\") pod \"keystone-757484cf46-h2rvl\" (UID: 
\"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.039837 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq766\" (UniqueName: \"kubernetes.io/projected/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-kube-api-access-jq766\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.039863 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-credential-keys\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.040155 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-config-data\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.040203 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-public-tls-certs\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.040246 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-scripts\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 
07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.040303 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-fernet-keys\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.049498 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-combined-ca-bundle\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.052027 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-config-data\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.052751 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-internal-tls-certs\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.053877 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-credential-keys\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.054442 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-fernet-keys\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.055833 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-public-tls-certs\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.056931 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-scripts\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.057688 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq766\" (UniqueName: \"kubernetes.io/projected/55ea6c63-9a3a-42da-92c0-08ba9bd1efbe-kube-api-access-jq766\") pod \"keystone-757484cf46-h2rvl\" (UID: \"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe\") " pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.073908 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.389980 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f9777c7b8-ctgxk"] Nov 29 07:24:14 crc kubenswrapper[4828]: W1129 07:24:14.391659 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32bb4de2_38a8_4361_9f97_d2932fc3bba6.slice/crio-606551748d8db040a8291fa2aaddbc626276d5fcef70dc7f85a5ac1b48fe8da0 WatchSource:0}: Error finding container 606551748d8db040a8291fa2aaddbc626276d5fcef70dc7f85a5ac1b48fe8da0: Status 404 returned error can't find the container with id 606551748d8db040a8291fa2aaddbc626276d5fcef70dc7f85a5ac1b48fe8da0 Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.577490 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-757484cf46-h2rvl"] Nov 29 07:24:14 crc kubenswrapper[4828]: W1129 07:24:14.585823 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55ea6c63_9a3a_42da_92c0_08ba9bd1efbe.slice/crio-8ba736d0660c0012858c026896531346abd7cbda131f8f1eb2fd621ba323053a WatchSource:0}: Error finding container 8ba736d0660c0012858c026896531346abd7cbda131f8f1eb2fd621ba323053a: Status 404 returned error can't find the container with id 8ba736d0660c0012858c026896531346abd7cbda131f8f1eb2fd621ba323053a Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.608796 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f9777c7b8-ctgxk" event={"ID":"32bb4de2-38a8-4361-9f97-d2932fc3bba6","Type":"ContainerStarted","Data":"606551748d8db040a8291fa2aaddbc626276d5fcef70dc7f85a5ac1b48fe8da0"} Nov 29 07:24:14 crc kubenswrapper[4828]: I1129 07:24:14.610799 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-757484cf46-h2rvl" 
event={"ID":"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe","Type":"ContainerStarted","Data":"8ba736d0660c0012858c026896531346abd7cbda131f8f1eb2fd621ba323053a"} Nov 29 07:24:15 crc kubenswrapper[4828]: I1129 07:24:15.621939 4828 generic.go:334] "Generic (PLEG): container finished" podID="d0f40f2a-e1df-4854-89af-848d2f1a7c86" containerID="efd451b1af34c1fcac7649f4d90ad7ffcd56b5edff7a4d7e4af5ef5da75a4fb9" exitCode=143 Nov 29 07:24:15 crc kubenswrapper[4828]: I1129 07:24:15.622023 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0f40f2a-e1df-4854-89af-848d2f1a7c86","Type":"ContainerDied","Data":"efd451b1af34c1fcac7649f4d90ad7ffcd56b5edff7a4d7e4af5ef5da75a4fb9"} Nov 29 07:24:15 crc kubenswrapper[4828]: I1129 07:24:15.625623 4828 generic.go:334] "Generic (PLEG): container finished" podID="aa41676e-b478-4599-b322-54e49002614f" containerID="f2e7828ce8ce159b8ecaed85998af9be750766ded39637e150614ebfe79d3ff2" exitCode=143 Nov 29 07:24:15 crc kubenswrapper[4828]: I1129 07:24:15.625671 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa41676e-b478-4599-b322-54e49002614f","Type":"ContainerDied","Data":"f2e7828ce8ce159b8ecaed85998af9be750766ded39637e150614ebfe79d3ff2"} Nov 29 07:24:21 crc kubenswrapper[4828]: I1129 07:24:21.131487 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:24:21 crc kubenswrapper[4828]: I1129 07:24:21.194384 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-qsztl"] Nov 29 07:24:21 crc kubenswrapper[4828]: I1129 07:24:21.195406 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" podUID="297f4501-d996-4d63-8936-a65af6acf060" containerName="dnsmasq-dns" containerID="cri-o://1aed01d97a6d5e61d74a4d01b7ea8a3d7b40de8a01fac207324a5c17c163bbd6" gracePeriod=10 
Nov 29 07:24:21 crc kubenswrapper[4828]: I1129 07:24:21.351435 4828 generic.go:334] "Generic (PLEG): container finished" podID="d0f40f2a-e1df-4854-89af-848d2f1a7c86" containerID="19a5d5abc81766af504b997763028757b19326a3568cb2c432973c242c21ab1c" exitCode=0 Nov 29 07:24:21 crc kubenswrapper[4828]: I1129 07:24:21.351500 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0f40f2a-e1df-4854-89af-848d2f1a7c86","Type":"ContainerDied","Data":"19a5d5abc81766af504b997763028757b19326a3568cb2c432973c242c21ab1c"} Nov 29 07:24:21 crc kubenswrapper[4828]: I1129 07:24:21.354925 4828 generic.go:334] "Generic (PLEG): container finished" podID="aa41676e-b478-4599-b322-54e49002614f" containerID="4b905e242f430339cedbbf326b020991b7abcf4d65145905344dc37097855cfe" exitCode=0 Nov 29 07:24:21 crc kubenswrapper[4828]: I1129 07:24:21.354971 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa41676e-b478-4599-b322-54e49002614f","Type":"ContainerDied","Data":"4b905e242f430339cedbbf326b020991b7abcf4d65145905344dc37097855cfe"} Nov 29 07:24:21 crc kubenswrapper[4828]: I1129 07:24:21.598537 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" podUID="297f4501-d996-4d63-8936-a65af6acf060" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Nov 29 07:24:22 crc kubenswrapper[4828]: I1129 07:24:22.365730 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-757484cf46-h2rvl" event={"ID":"55ea6c63-9a3a-42da-92c0-08ba9bd1efbe","Type":"ContainerStarted","Data":"a96e433136e7acfa1daa60ec549a7261f91f63cc7901852c95a0a06dc7be63ed"} Nov 29 07:24:22 crc kubenswrapper[4828]: I1129 07:24:22.366827 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:22 crc kubenswrapper[4828]: I1129 
07:24:22.369934 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f9777c7b8-ctgxk" event={"ID":"32bb4de2-38a8-4361-9f97-d2932fc3bba6","Type":"ContainerStarted","Data":"e53201163e201fb37398c761480e5c63f9e0399b091d24fec0e2d0f13af10176"} Nov 29 07:24:22 crc kubenswrapper[4828]: I1129 07:24:22.392515 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-757484cf46-h2rvl" podStartSLOduration=9.392491461 podStartE2EDuration="9.392491461s" podCreationTimestamp="2025-11-29 07:24:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:22.388693175 +0000 UTC m=+1402.010769233" watchObservedRunningTime="2025-11-29 07:24:22.392491461 +0000 UTC m=+1402.014567519" Nov 29 07:24:23 crc kubenswrapper[4828]: I1129 07:24:23.381862 4828 generic.go:334] "Generic (PLEG): container finished" podID="297f4501-d996-4d63-8936-a65af6acf060" containerID="1aed01d97a6d5e61d74a4d01b7ea8a3d7b40de8a01fac207324a5c17c163bbd6" exitCode=0 Nov 29 07:24:23 crc kubenswrapper[4828]: I1129 07:24:23.381942 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" event={"ID":"297f4501-d996-4d63-8936-a65af6acf060","Type":"ContainerDied","Data":"1aed01d97a6d5e61d74a4d01b7ea8a3d7b40de8a01fac207324a5c17c163bbd6"} Nov 29 07:24:31 crc kubenswrapper[4828]: I1129 07:24:31.597241 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" podUID="297f4501-d996-4d63-8936-a65af6acf060" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.326028 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.386246 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.396902 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.476073 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68jjr\" (UniqueName: \"kubernetes.io/projected/aa41676e-b478-4599-b322-54e49002614f-kube-api-access-68jjr\") pod \"aa41676e-b478-4599-b322-54e49002614f\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.476148 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa41676e-b478-4599-b322-54e49002614f-httpd-run\") pod \"aa41676e-b478-4599-b322-54e49002614f\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.476182 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa41676e-b478-4599-b322-54e49002614f-logs\") pod \"aa41676e-b478-4599-b322-54e49002614f\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.476211 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"aa41676e-b478-4599-b322-54e49002614f\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.476236 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-config-data\") pod \"aa41676e-b478-4599-b322-54e49002614f\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.476255 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-scripts\") pod \"aa41676e-b478-4599-b322-54e49002614f\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.476284 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-combined-ca-bundle\") pod \"aa41676e-b478-4599-b322-54e49002614f\" (UID: \"aa41676e-b478-4599-b322-54e49002614f\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.476805 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa41676e-b478-4599-b322-54e49002614f-logs" (OuterVolumeSpecName: "logs") pod "aa41676e-b478-4599-b322-54e49002614f" (UID: "aa41676e-b478-4599-b322-54e49002614f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.477020 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa41676e-b478-4599-b322-54e49002614f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "aa41676e-b478-4599-b322-54e49002614f" (UID: "aa41676e-b478-4599-b322-54e49002614f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.482457 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "aa41676e-b478-4599-b322-54e49002614f" (UID: "aa41676e-b478-4599-b322-54e49002614f"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.482516 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa41676e-b478-4599-b322-54e49002614f-kube-api-access-68jjr" (OuterVolumeSpecName: "kube-api-access-68jjr") pod "aa41676e-b478-4599-b322-54e49002614f" (UID: "aa41676e-b478-4599-b322-54e49002614f"). InnerVolumeSpecName "kube-api-access-68jjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.482701 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.483015 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0f40f2a-e1df-4854-89af-848d2f1a7c86","Type":"ContainerDied","Data":"9791105ff99d49a2dfa89b1ffb8c3db0a2233c30113e9546242ec62db46d4880"} Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.483133 4828 scope.go:117] "RemoveContainer" containerID="19a5d5abc81766af504b997763028757b19326a3568cb2c432973c242c21ab1c" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.484442 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-scripts" (OuterVolumeSpecName: "scripts") pod "aa41676e-b478-4599-b322-54e49002614f" (UID: "aa41676e-b478-4599-b322-54e49002614f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.492758 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa41676e-b478-4599-b322-54e49002614f","Type":"ContainerDied","Data":"a58ac3e29ad409f434fc18dfb3da7c4e5a139499fafc88ac3c52e8f7155259ef"} Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.492852 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.495558 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" event={"ID":"297f4501-d996-4d63-8936-a65af6acf060","Type":"ContainerDied","Data":"e35289949e6efe7c0ef1864e4556e69598109931c3d0d7197f56d29cf9fddd5d"} Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.495582 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.506719 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa41676e-b478-4599-b322-54e49002614f" (UID: "aa41676e-b478-4599-b322-54e49002614f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.534906 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-config-data" (OuterVolumeSpecName: "config-data") pod "aa41676e-b478-4599-b322-54e49002614f" (UID: "aa41676e-b478-4599-b322-54e49002614f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.577110 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-config-data\") pod \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.577179 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-combined-ca-bundle\") pod \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.577218 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7zsk\" (UniqueName: \"kubernetes.io/projected/297f4501-d996-4d63-8936-a65af6acf060-kube-api-access-v7zsk\") pod \"297f4501-d996-4d63-8936-a65af6acf060\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.577396 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0f40f2a-e1df-4854-89af-848d2f1a7c86-logs\") pod \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.577440 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px5b4\" (UniqueName: \"kubernetes.io/projected/d0f40f2a-e1df-4854-89af-848d2f1a7c86-kube-api-access-px5b4\") pod \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.577569 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-dns-svc\") pod \"297f4501-d996-4d63-8936-a65af6acf060\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.577614 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-config\") pod \"297f4501-d996-4d63-8936-a65af6acf060\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.577653 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0f40f2a-e1df-4854-89af-848d2f1a7c86-httpd-run\") pod \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.577710 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-dns-swift-storage-0\") pod \"297f4501-d996-4d63-8936-a65af6acf060\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.577763 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.577816 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-ovsdbserver-nb\") pod \"297f4501-d996-4d63-8936-a65af6acf060\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.577856 4828 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-scripts\") pod \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\" (UID: \"d0f40f2a-e1df-4854-89af-848d2f1a7c86\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.577886 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-ovsdbserver-sb\") pod \"297f4501-d996-4d63-8936-a65af6acf060\" (UID: \"297f4501-d996-4d63-8936-a65af6acf060\") " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.578331 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68jjr\" (UniqueName: \"kubernetes.io/projected/aa41676e-b478-4599-b322-54e49002614f-kube-api-access-68jjr\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.578359 4828 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa41676e-b478-4599-b322-54e49002614f-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.578371 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa41676e-b478-4599-b322-54e49002614f-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.578784 4828 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.578807 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.578820 4828 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.578832 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa41676e-b478-4599-b322-54e49002614f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.578571 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0f40f2a-e1df-4854-89af-848d2f1a7c86-logs" (OuterVolumeSpecName: "logs") pod "d0f40f2a-e1df-4854-89af-848d2f1a7c86" (UID: "d0f40f2a-e1df-4854-89af-848d2f1a7c86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.579190 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0f40f2a-e1df-4854-89af-848d2f1a7c86-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d0f40f2a-e1df-4854-89af-848d2f1a7c86" (UID: "d0f40f2a-e1df-4854-89af-848d2f1a7c86"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.581010 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/297f4501-d996-4d63-8936-a65af6acf060-kube-api-access-v7zsk" (OuterVolumeSpecName: "kube-api-access-v7zsk") pod "297f4501-d996-4d63-8936-a65af6acf060" (UID: "297f4501-d996-4d63-8936-a65af6acf060"). InnerVolumeSpecName "kube-api-access-v7zsk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.583136 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0f40f2a-e1df-4854-89af-848d2f1a7c86-kube-api-access-px5b4" (OuterVolumeSpecName: "kube-api-access-px5b4") pod "d0f40f2a-e1df-4854-89af-848d2f1a7c86" (UID: "d0f40f2a-e1df-4854-89af-848d2f1a7c86"). InnerVolumeSpecName "kube-api-access-px5b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.590975 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "d0f40f2a-e1df-4854-89af-848d2f1a7c86" (UID: "d0f40f2a-e1df-4854-89af-848d2f1a7c86"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.597944 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-scripts" (OuterVolumeSpecName: "scripts") pod "d0f40f2a-e1df-4854-89af-848d2f1a7c86" (UID: "d0f40f2a-e1df-4854-89af-848d2f1a7c86"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.604399 4828 scope.go:117] "RemoveContainer" containerID="efd451b1af34c1fcac7649f4d90ad7ffcd56b5edff7a4d7e4af5ef5da75a4fb9" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.604772 4828 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.608988 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0f40f2a-e1df-4854-89af-848d2f1a7c86" (UID: "d0f40f2a-e1df-4854-89af-848d2f1a7c86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.630942 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "297f4501-d996-4d63-8936-a65af6acf060" (UID: "297f4501-d996-4d63-8936-a65af6acf060"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.631476 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-config" (OuterVolumeSpecName: "config") pod "297f4501-d996-4d63-8936-a65af6acf060" (UID: "297f4501-d996-4d63-8936-a65af6acf060"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.632858 4828 scope.go:117] "RemoveContainer" containerID="4b905e242f430339cedbbf326b020991b7abcf4d65145905344dc37097855cfe" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.634103 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "297f4501-d996-4d63-8936-a65af6acf060" (UID: "297f4501-d996-4d63-8936-a65af6acf060"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.635266 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "297f4501-d996-4d63-8936-a65af6acf060" (UID: "297f4501-d996-4d63-8936-a65af6acf060"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.639298 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-config-data" (OuterVolumeSpecName: "config-data") pod "d0f40f2a-e1df-4854-89af-848d2f1a7c86" (UID: "d0f40f2a-e1df-4854-89af-848d2f1a7c86"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.658666 4828 scope.go:117] "RemoveContainer" containerID="f2e7828ce8ce159b8ecaed85998af9be750766ded39637e150614ebfe79d3ff2" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.659206 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "297f4501-d996-4d63-8936-a65af6acf060" (UID: "297f4501-d996-4d63-8936-a65af6acf060"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.677088 4828 scope.go:117] "RemoveContainer" containerID="1aed01d97a6d5e61d74a4d01b7ea8a3d7b40de8a01fac207324a5c17c163bbd6" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.680023 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.680063 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.680079 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.680093 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.680108 4828 reconciler_common.go:293] 
"Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f40f2a-e1df-4854-89af-848d2f1a7c86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.680120 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7zsk\" (UniqueName: \"kubernetes.io/projected/297f4501-d996-4d63-8936-a65af6acf060-kube-api-access-v7zsk\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.680134 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0f40f2a-e1df-4854-89af-848d2f1a7c86-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.680145 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px5b4\" (UniqueName: \"kubernetes.io/projected/d0f40f2a-e1df-4854-89af-848d2f1a7c86-kube-api-access-px5b4\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.680156 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.680167 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.680177 4828 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0f40f2a-e1df-4854-89af-848d2f1a7c86-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.680188 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/297f4501-d996-4d63-8936-a65af6acf060-dns-swift-storage-0\") 
on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.680199 4828 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.680246 4828 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.699135 4828 scope.go:117] "RemoveContainer" containerID="4ed7129a7802c28b70533c379c97e03682d4931cc8d90c0eba85420f23046a05" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.699256 4828 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.781592 4828 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.821764 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.843402 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.853310 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-qsztl"] Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.861904 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-qsztl"] Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.883907 4828 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-external-api-0"] Nov 29 07:24:32 crc kubenswrapper[4828]: E1129 07:24:32.884483 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa41676e-b478-4599-b322-54e49002614f" containerName="glance-log" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.884512 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa41676e-b478-4599-b322-54e49002614f" containerName="glance-log" Nov 29 07:24:32 crc kubenswrapper[4828]: E1129 07:24:32.884543 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297f4501-d996-4d63-8936-a65af6acf060" containerName="dnsmasq-dns" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.884552 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="297f4501-d996-4d63-8936-a65af6acf060" containerName="dnsmasq-dns" Nov 29 07:24:32 crc kubenswrapper[4828]: E1129 07:24:32.884569 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f40f2a-e1df-4854-89af-848d2f1a7c86" containerName="glance-log" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.884577 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f40f2a-e1df-4854-89af-848d2f1a7c86" containerName="glance-log" Nov 29 07:24:32 crc kubenswrapper[4828]: E1129 07:24:32.884594 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa41676e-b478-4599-b322-54e49002614f" containerName="glance-httpd" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.884601 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa41676e-b478-4599-b322-54e49002614f" containerName="glance-httpd" Nov 29 07:24:32 crc kubenswrapper[4828]: E1129 07:24:32.884614 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f40f2a-e1df-4854-89af-848d2f1a7c86" containerName="glance-httpd" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.884622 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f40f2a-e1df-4854-89af-848d2f1a7c86" containerName="glance-httpd" Nov 
29 07:24:32 crc kubenswrapper[4828]: E1129 07:24:32.884633 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297f4501-d996-4d63-8936-a65af6acf060" containerName="init" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.884641 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="297f4501-d996-4d63-8936-a65af6acf060" containerName="init" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.884848 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa41676e-b478-4599-b322-54e49002614f" containerName="glance-httpd" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.884869 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f40f2a-e1df-4854-89af-848d2f1a7c86" containerName="glance-log" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.884884 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="297f4501-d996-4d63-8936-a65af6acf060" containerName="dnsmasq-dns" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.884896 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f40f2a-e1df-4854-89af-848d2f1a7c86" containerName="glance-httpd" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.884912 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa41676e-b478-4599-b322-54e49002614f" containerName="glance-log" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.892438 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.892587 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.895552 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.895624 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ghtfr" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.895805 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.895561 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.903819 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.926575 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.933652 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.935123 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.937224 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.937296 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 29 07:24:32 crc kubenswrapper[4828]: I1129 07:24:32.942923 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.086212 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.086530 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/319d7dd8-e096-41a6-8394-fed7f944e1ae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.086605 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17156cfb-ec83-47db-955b-44f5045179e8-logs\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.086672 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhg7p\" (UniqueName: 
\"kubernetes.io/projected/319d7dd8-e096-41a6-8394-fed7f944e1ae-kube-api-access-mhg7p\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.086751 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17156cfb-ec83-47db-955b-44f5045179e8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.086832 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.086976 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zzmn\" (UniqueName: \"kubernetes.io/projected/17156cfb-ec83-47db-955b-44f5045179e8-kube-api-access-7zzmn\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.087128 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.087177 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-scripts\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.087242 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.087416 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.087526 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/319d7dd8-e096-41a6-8394-fed7f944e1ae-logs\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.087621 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-config-data\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.087776 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.087960 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-config-data\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.087998 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.190052 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.190337 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/319d7dd8-e096-41a6-8394-fed7f944e1ae-logs\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.190425 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-config-data\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.190217 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.190524 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.190693 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-config-data\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.190779 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.190837 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/319d7dd8-e096-41a6-8394-fed7f944e1ae-logs\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.190853 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.190976 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/319d7dd8-e096-41a6-8394-fed7f944e1ae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.191048 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17156cfb-ec83-47db-955b-44f5045179e8-logs\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.191021 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.191177 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17156cfb-ec83-47db-955b-44f5045179e8-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.191270 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhg7p\" (UniqueName: \"kubernetes.io/projected/319d7dd8-e096-41a6-8394-fed7f944e1ae-kube-api-access-mhg7p\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.191357 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.191455 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zzmn\" (UniqueName: \"kubernetes.io/projected/17156cfb-ec83-47db-955b-44f5045179e8-kube-api-access-7zzmn\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.191543 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.191602 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.191641 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17156cfb-ec83-47db-955b-44f5045179e8-logs\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.191375 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/319d7dd8-e096-41a6-8394-fed7f944e1ae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.191761 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.195514 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17156cfb-ec83-47db-955b-44f5045179e8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.200469 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc 
kubenswrapper[4828]: I1129 07:24:33.213788 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.214493 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.215749 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zzmn\" (UniqueName: \"kubernetes.io/projected/17156cfb-ec83-47db-955b-44f5045179e8-kube-api-access-7zzmn\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.216002 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-config-data\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.219138 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.219764 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-config-data\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.220191 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhg7p\" (UniqueName: \"kubernetes.io/projected/319d7dd8-e096-41a6-8394-fed7f944e1ae-kube-api-access-mhg7p\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.224014 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-scripts\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.228036 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.244758 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.248862 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.258716 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: E1129 07:24:33.264950 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Nov 29 07:24:33 crc kubenswrapper[4828]: E1129 07:24:33.265199 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndfs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod ceilometer-0_openstack(adf23e65-d886-48b7-b5b8-8f23a81cdc81): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.276703 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.427113 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="297f4501-d996-4d63-8936-a65af6acf060" path="/var/lib/kubelet/pods/297f4501-d996-4d63-8936-a65af6acf060/volumes" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.428571 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa41676e-b478-4599-b322-54e49002614f" path="/var/lib/kubelet/pods/aa41676e-b478-4599-b322-54e49002614f/volumes" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.429996 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0f40f2a-e1df-4854-89af-848d2f1a7c86" path="/var/lib/kubelet/pods/d0f40f2a-e1df-4854-89af-848d2f1a7c86/volumes" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.522540 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.533023 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f9777c7b8-ctgxk" event={"ID":"32bb4de2-38a8-4361-9f97-d2932fc3bba6","Type":"ContainerStarted","Data":"312d349fd3701449db21791c6bddba90903e85afaadd8f3d319be8b3ec815599"} Nov 29 07:24:33 crc kubenswrapper[4828]: I1129 07:24:33.810160 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:24:34 crc kubenswrapper[4828]: I1129 07:24:34.108950 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:24:34 crc kubenswrapper[4828]: W1129 07:24:34.130526 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17156cfb_ec83_47db_955b_44f5045179e8.slice/crio-1af37e5634fd5e5e30f3adf849f4a319cc82ca909ed0de28d5fe3cc382eb7722 WatchSource:0}: Error finding container 1af37e5634fd5e5e30f3adf849f4a319cc82ca909ed0de28d5fe3cc382eb7722: Status 404 returned error can't find the container with id 1af37e5634fd5e5e30f3adf849f4a319cc82ca909ed0de28d5fe3cc382eb7722 Nov 29 07:24:34 crc kubenswrapper[4828]: I1129 07:24:34.544145 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"319d7dd8-e096-41a6-8394-fed7f944e1ae","Type":"ContainerStarted","Data":"919ead4c35cf27d001138199848080fb757693e4076b5f1a2deded54e5139bfe"} Nov 29 07:24:34 crc kubenswrapper[4828]: I1129 07:24:34.545567 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"17156cfb-ec83-47db-955b-44f5045179e8","Type":"ContainerStarted","Data":"1af37e5634fd5e5e30f3adf849f4a319cc82ca909ed0de28d5fe3cc382eb7722"} Nov 29 07:24:35 crc kubenswrapper[4828]: I1129 07:24:35.558311 4828 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:35 crc kubenswrapper[4828]: I1129 07:24:35.558708 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:35 crc kubenswrapper[4828]: I1129 07:24:35.585595 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5f9777c7b8-ctgxk" podStartSLOduration=22.58557178 podStartE2EDuration="22.58557178s" podCreationTimestamp="2025-11-29 07:24:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:35.578123433 +0000 UTC m=+1415.200199491" watchObservedRunningTime="2025-11-29 07:24:35.58557178 +0000 UTC m=+1415.207647838" Nov 29 07:24:36 crc kubenswrapper[4828]: I1129 07:24:36.570362 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"17156cfb-ec83-47db-955b-44f5045179e8","Type":"ContainerStarted","Data":"4d4959de6c962437c90a0ce964a42c61e5111d50625e761b8ed4d74c7891148c"} Nov 29 07:24:36 crc kubenswrapper[4828]: I1129 07:24:36.574551 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"319d7dd8-e096-41a6-8394-fed7f944e1ae","Type":"ContainerStarted","Data":"c62ae5c9596d3334c6233ec70ed886ac71d9d21882603ace7ef5e193a9ec13b5"} Nov 29 07:24:36 crc kubenswrapper[4828]: I1129 07:24:36.598822 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-qsztl" podUID="297f4501-d996-4d63-8936-a65af6acf060" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout" Nov 29 07:24:37 crc kubenswrapper[4828]: I1129 07:24:37.588803 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"17156cfb-ec83-47db-955b-44f5045179e8","Type":"ContainerStarted","Data":"17d0e2fd6522341907e3240c838e290e357aec11784787f7ffdd2291e16d0003"} Nov 29 07:24:37 crc kubenswrapper[4828]: I1129 07:24:37.593932 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"319d7dd8-e096-41a6-8394-fed7f944e1ae","Type":"ContainerStarted","Data":"b7b44572a0f02a5f5f6641ea4e39ebe00423ae62a08bf8e3342f933c94616f77"} Nov 29 07:24:38 crc kubenswrapper[4828]: I1129 07:24:38.644081 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.644062554 podStartE2EDuration="6.644062554s" podCreationTimestamp="2025-11-29 07:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:38.638565615 +0000 UTC m=+1418.260641673" watchObservedRunningTime="2025-11-29 07:24:38.644062554 +0000 UTC m=+1418.266138612" Nov 29 07:24:38 crc kubenswrapper[4828]: I1129 07:24:38.678852 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.67883158 podStartE2EDuration="6.67883158s" podCreationTimestamp="2025-11-29 07:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:38.671082394 +0000 UTC m=+1418.293158472" watchObservedRunningTime="2025-11-29 07:24:38.67883158 +0000 UTC m=+1418.300907638" Nov 29 07:24:39 crc kubenswrapper[4828]: I1129 07:24:39.226883 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:40 crc kubenswrapper[4828]: I1129 07:24:40.586197 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f9777c7b8-ctgxk" Nov 29 07:24:41 crc 
kubenswrapper[4828]: I1129 07:24:41.486673 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:24:41 crc kubenswrapper[4828]: I1129 07:24:41.487050 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:24:41 crc kubenswrapper[4828]: I1129 07:24:41.487099 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:24:41 crc kubenswrapper[4828]: I1129 07:24:41.488040 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1153e52620f218b272037744559959e572334f0c0db38036c7622fd8f01d457"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:24:41 crc kubenswrapper[4828]: I1129 07:24:41.488106 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://f1153e52620f218b272037744559959e572334f0c0db38036c7622fd8f01d457" gracePeriod=600 Nov 29 07:24:41 crc kubenswrapper[4828]: I1129 07:24:41.703902 4828 generic.go:334] "Generic (PLEG): container finished" podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" 
containerID="f1153e52620f218b272037744559959e572334f0c0db38036c7622fd8f01d457" exitCode=0 Nov 29 07:24:41 crc kubenswrapper[4828]: I1129 07:24:41.703965 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"f1153e52620f218b272037744559959e572334f0c0db38036c7622fd8f01d457"} Nov 29 07:24:41 crc kubenswrapper[4828]: I1129 07:24:41.704057 4828 scope.go:117] "RemoveContainer" containerID="c82e0ff81acb7d01ceef87bfa4d82fd7e8308a493da4b0fdc2e7187d68f7ed64" Nov 29 07:24:41 crc kubenswrapper[4828]: I1129 07:24:41.708041 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vphwh" event={"ID":"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c","Type":"ContainerStarted","Data":"7cd5cb7120d24918028551e6727f971b48efa1aa85c5494735482808a6365985"} Nov 29 07:24:41 crc kubenswrapper[4828]: I1129 07:24:41.711997 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dwxw5" event={"ID":"3d3d2548-679c-4c58-8709-a28f3178c1d5","Type":"ContainerStarted","Data":"db6a36a8280d2a912a24e482556690d316cb1450bca2f9da1609125e73d6bbd1"} Nov 29 07:24:41 crc kubenswrapper[4828]: I1129 07:24:41.736426 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-vphwh" podStartSLOduration=2.457689871 podStartE2EDuration="1m30.736393759s" podCreationTimestamp="2025-11-29 07:23:11 +0000 UTC" firstStartedPulling="2025-11-29 07:23:12.289924778 +0000 UTC m=+1331.912000836" lastFinishedPulling="2025-11-29 07:24:40.568628666 +0000 UTC m=+1420.190704724" observedRunningTime="2025-11-29 07:24:41.725195037 +0000 UTC m=+1421.347271095" watchObservedRunningTime="2025-11-29 07:24:41.736393759 +0000 UTC m=+1421.358469817" Nov 29 07:24:41 crc kubenswrapper[4828]: I1129 07:24:41.749165 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-db-sync-dwxw5" podStartSLOduration=2.317684622 podStartE2EDuration="1m30.74913979s" podCreationTimestamp="2025-11-29 07:23:11 +0000 UTC" firstStartedPulling="2025-11-29 07:23:12.139052026 +0000 UTC m=+1331.761128084" lastFinishedPulling="2025-11-29 07:24:40.570507194 +0000 UTC m=+1420.192583252" observedRunningTime="2025-11-29 07:24:41.74397559 +0000 UTC m=+1421.366051658" watchObservedRunningTime="2025-11-29 07:24:41.74913979 +0000 UTC m=+1421.371215848" Nov 29 07:24:42 crc kubenswrapper[4828]: I1129 07:24:42.728631 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd"} Nov 29 07:24:43 crc kubenswrapper[4828]: I1129 07:24:43.264543 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:43 crc kubenswrapper[4828]: I1129 07:24:43.264628 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:43 crc kubenswrapper[4828]: I1129 07:24:43.303447 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:43 crc kubenswrapper[4828]: I1129 07:24:43.312058 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:43 crc kubenswrapper[4828]: I1129 07:24:43.523132 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 07:24:43 crc kubenswrapper[4828]: I1129 07:24:43.523173 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 07:24:43 crc kubenswrapper[4828]: I1129 07:24:43.553782 4828 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 07:24:43 crc kubenswrapper[4828]: I1129 07:24:43.574451 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 07:24:43 crc kubenswrapper[4828]: I1129 07:24:43.739355 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:43 crc kubenswrapper[4828]: I1129 07:24:43.739738 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:43 crc kubenswrapper[4828]: I1129 07:24:43.739849 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:24:43 crc kubenswrapper[4828]: I1129 07:24:43.740973 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:24:45 crc kubenswrapper[4828]: I1129 07:24:45.776755 4828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:24:45 crc kubenswrapper[4828]: I1129 07:24:45.776804 4828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:24:45 crc kubenswrapper[4828]: I1129 07:24:45.777628 4828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:24:45 crc kubenswrapper[4828]: I1129 07:24:45.777645 4828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:24:46 crc kubenswrapper[4828]: I1129 07:24:46.106172 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-757484cf46-h2rvl" Nov 29 07:24:46 crc kubenswrapper[4828]: I1129 07:24:46.227427 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 07:24:46 crc kubenswrapper[4828]: I1129 
07:24:46.228408 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 07:24:46 crc kubenswrapper[4828]: I1129 07:24:46.273606 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:46 crc kubenswrapper[4828]: I1129 07:24:46.304772 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.250109 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.252251 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.255041 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.255092 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.255554 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9trxw" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.260563 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.431487 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/262aab08-d0cd-47a7-b913-c3df9daf6739-openstack-config-secret\") pod \"openstackclient\" (UID: \"262aab08-d0cd-47a7-b913-c3df9daf6739\") " pod="openstack/openstackclient" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.431588 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/262aab08-d0cd-47a7-b913-c3df9daf6739-combined-ca-bundle\") pod \"openstackclient\" (UID: \"262aab08-d0cd-47a7-b913-c3df9daf6739\") " pod="openstack/openstackclient" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.431645 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhww9\" (UniqueName: \"kubernetes.io/projected/262aab08-d0cd-47a7-b913-c3df9daf6739-kube-api-access-dhww9\") pod \"openstackclient\" (UID: \"262aab08-d0cd-47a7-b913-c3df9daf6739\") " pod="openstack/openstackclient" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.432256 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/262aab08-d0cd-47a7-b913-c3df9daf6739-openstack-config\") pod \"openstackclient\" (UID: \"262aab08-d0cd-47a7-b913-c3df9daf6739\") " pod="openstack/openstackclient" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.534375 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/262aab08-d0cd-47a7-b913-c3df9daf6739-combined-ca-bundle\") pod \"openstackclient\" (UID: \"262aab08-d0cd-47a7-b913-c3df9daf6739\") " pod="openstack/openstackclient" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.534469 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhww9\" (UniqueName: \"kubernetes.io/projected/262aab08-d0cd-47a7-b913-c3df9daf6739-kube-api-access-dhww9\") pod \"openstackclient\" (UID: \"262aab08-d0cd-47a7-b913-c3df9daf6739\") " pod="openstack/openstackclient" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.534540 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/262aab08-d0cd-47a7-b913-c3df9daf6739-openstack-config\") pod \"openstackclient\" (UID: \"262aab08-d0cd-47a7-b913-c3df9daf6739\") " pod="openstack/openstackclient" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.534654 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/262aab08-d0cd-47a7-b913-c3df9daf6739-openstack-config-secret\") pod \"openstackclient\" (UID: \"262aab08-d0cd-47a7-b913-c3df9daf6739\") " pod="openstack/openstackclient" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.537106 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/262aab08-d0cd-47a7-b913-c3df9daf6739-openstack-config\") pod \"openstackclient\" (UID: \"262aab08-d0cd-47a7-b913-c3df9daf6739\") " pod="openstack/openstackclient" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.543137 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/262aab08-d0cd-47a7-b913-c3df9daf6739-combined-ca-bundle\") pod \"openstackclient\" (UID: \"262aab08-d0cd-47a7-b913-c3df9daf6739\") " pod="openstack/openstackclient" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.544623 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/262aab08-d0cd-47a7-b913-c3df9daf6739-openstack-config-secret\") pod \"openstackclient\" (UID: \"262aab08-d0cd-47a7-b913-c3df9daf6739\") " pod="openstack/openstackclient" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.556415 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhww9\" (UniqueName: \"kubernetes.io/projected/262aab08-d0cd-47a7-b913-c3df9daf6739-kube-api-access-dhww9\") pod \"openstackclient\" (UID: \"262aab08-d0cd-47a7-b913-c3df9daf6739\") " 
pod="openstack/openstackclient" Nov 29 07:24:50 crc kubenswrapper[4828]: I1129 07:24:50.616461 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 29 07:24:50 crc kubenswrapper[4828]: E1129 07:24:50.995360 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:9fd33563f895044a695c9352d34cb144bf53704b61b6cb94fe219ebbb891db92: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-ceilometer-central/blobs/sha256:9fd33563f895044a695c9352d34cb144bf53704b61b6cb94fe219ebbb891db92\\\": context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="adf23e65-d886-48b7-b5b8-8f23a81cdc81" Nov 29 07:24:51 crc kubenswrapper[4828]: I1129 07:24:51.167826 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 29 07:24:51 crc kubenswrapper[4828]: W1129 07:24:51.170825 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod262aab08_d0cd_47a7_b913_c3df9daf6739.slice/crio-d11710d810ea30271e2122024babfd28b61d1bc57cae8c6bb9aa7f06455400da WatchSource:0}: Error finding container d11710d810ea30271e2122024babfd28b61d1bc57cae8c6bb9aa7f06455400da: Status 404 returned error can't find the container with id d11710d810ea30271e2122024babfd28b61d1bc57cae8c6bb9aa7f06455400da Nov 29 07:24:51 crc kubenswrapper[4828]: I1129 07:24:51.850741 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adf23e65-d886-48b7-b5b8-8f23a81cdc81","Type":"ContainerStarted","Data":"eba412ba97ca4e1bb7ebefabe1aac667c6781c6cb41f43c697e15c933bb1f25f"} Nov 29 07:24:51 crc kubenswrapper[4828]: I1129 07:24:51.851434 4828 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="adf23e65-d886-48b7-b5b8-8f23a81cdc81" containerName="ceilometer-notification-agent" containerID="cri-o://fe522dbd0ea27352e0f8636a247d322a8779a414aafdd3620a462b6ada4215f8" gracePeriod=30 Nov 29 07:24:51 crc kubenswrapper[4828]: I1129 07:24:51.851508 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:24:51 crc kubenswrapper[4828]: I1129 07:24:51.851586 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="adf23e65-d886-48b7-b5b8-8f23a81cdc81" containerName="proxy-httpd" containerID="cri-o://eba412ba97ca4e1bb7ebefabe1aac667c6781c6cb41f43c697e15c933bb1f25f" gracePeriod=30 Nov 29 07:24:51 crc kubenswrapper[4828]: I1129 07:24:51.853784 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"262aab08-d0cd-47a7-b913-c3df9daf6739","Type":"ContainerStarted","Data":"d11710d810ea30271e2122024babfd28b61d1bc57cae8c6bb9aa7f06455400da"} Nov 29 07:24:52 crc kubenswrapper[4828]: I1129 07:24:52.873579 4828 generic.go:334] "Generic (PLEG): container finished" podID="adf23e65-d886-48b7-b5b8-8f23a81cdc81" containerID="eba412ba97ca4e1bb7ebefabe1aac667c6781c6cb41f43c697e15c933bb1f25f" exitCode=0 Nov 29 07:24:52 crc kubenswrapper[4828]: I1129 07:24:52.874146 4828 generic.go:334] "Generic (PLEG): container finished" podID="adf23e65-d886-48b7-b5b8-8f23a81cdc81" containerID="fe522dbd0ea27352e0f8636a247d322a8779a414aafdd3620a462b6ada4215f8" exitCode=0 Nov 29 07:24:52 crc kubenswrapper[4828]: I1129 07:24:52.873747 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adf23e65-d886-48b7-b5b8-8f23a81cdc81","Type":"ContainerDied","Data":"eba412ba97ca4e1bb7ebefabe1aac667c6781c6cb41f43c697e15c933bb1f25f"} Nov 29 07:24:52 crc kubenswrapper[4828]: I1129 07:24:52.874192 4828 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adf23e65-d886-48b7-b5b8-8f23a81cdc81","Type":"ContainerDied","Data":"fe522dbd0ea27352e0f8636a247d322a8779a414aafdd3620a462b6ada4215f8"} Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.132950 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.190932 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndfs8\" (UniqueName: \"kubernetes.io/projected/adf23e65-d886-48b7-b5b8-8f23a81cdc81-kube-api-access-ndfs8\") pod \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.191029 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adf23e65-d886-48b7-b5b8-8f23a81cdc81-log-httpd\") pod \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.191056 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adf23e65-d886-48b7-b5b8-8f23a81cdc81-run-httpd\") pod \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.191077 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-config-data\") pod \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.191091 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-sg-core-conf-yaml\") pod \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.191148 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-combined-ca-bundle\") pod \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.191194 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-scripts\") pod \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\" (UID: \"adf23e65-d886-48b7-b5b8-8f23a81cdc81\") " Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.191744 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adf23e65-d886-48b7-b5b8-8f23a81cdc81-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "adf23e65-d886-48b7-b5b8-8f23a81cdc81" (UID: "adf23e65-d886-48b7-b5b8-8f23a81cdc81"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.192230 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adf23e65-d886-48b7-b5b8-8f23a81cdc81-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "adf23e65-d886-48b7-b5b8-8f23a81cdc81" (UID: "adf23e65-d886-48b7-b5b8-8f23a81cdc81"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.197171 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "adf23e65-d886-48b7-b5b8-8f23a81cdc81" (UID: "adf23e65-d886-48b7-b5b8-8f23a81cdc81"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.198034 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-scripts" (OuterVolumeSpecName: "scripts") pod "adf23e65-d886-48b7-b5b8-8f23a81cdc81" (UID: "adf23e65-d886-48b7-b5b8-8f23a81cdc81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.199899 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf23e65-d886-48b7-b5b8-8f23a81cdc81-kube-api-access-ndfs8" (OuterVolumeSpecName: "kube-api-access-ndfs8") pod "adf23e65-d886-48b7-b5b8-8f23a81cdc81" (UID: "adf23e65-d886-48b7-b5b8-8f23a81cdc81"). InnerVolumeSpecName "kube-api-access-ndfs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.250032 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adf23e65-d886-48b7-b5b8-8f23a81cdc81" (UID: "adf23e65-d886-48b7-b5b8-8f23a81cdc81"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.276944 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-config-data" (OuterVolumeSpecName: "config-data") pod "adf23e65-d886-48b7-b5b8-8f23a81cdc81" (UID: "adf23e65-d886-48b7-b5b8-8f23a81cdc81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.293611 4828 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adf23e65-d886-48b7-b5b8-8f23a81cdc81-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.293659 4828 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adf23e65-d886-48b7-b5b8-8f23a81cdc81-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.293795 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.293807 4828 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.293820 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.293835 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/adf23e65-d886-48b7-b5b8-8f23a81cdc81-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.293846 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndfs8\" (UniqueName: \"kubernetes.io/projected/adf23e65-d886-48b7-b5b8-8f23a81cdc81-kube-api-access-ndfs8\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.891560 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adf23e65-d886-48b7-b5b8-8f23a81cdc81","Type":"ContainerDied","Data":"9e184e59a43be1d261cf4a8b3d4259bbe2a15b9a881c738a52d6d37090df520a"} Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.891648 4828 scope.go:117] "RemoveContainer" containerID="eba412ba97ca4e1bb7ebefabe1aac667c6781c6cb41f43c697e15c933bb1f25f" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.891865 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.921758 4828 scope.go:117] "RemoveContainer" containerID="fe522dbd0ea27352e0f8636a247d322a8779a414aafdd3620a462b6ada4215f8" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.972984 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.987491 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.998241 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:24:53 crc kubenswrapper[4828]: E1129 07:24:53.998721 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf23e65-d886-48b7-b5b8-8f23a81cdc81" containerName="ceilometer-notification-agent" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.998739 4828 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="adf23e65-d886-48b7-b5b8-8f23a81cdc81" containerName="ceilometer-notification-agent" Nov 29 07:24:53 crc kubenswrapper[4828]: E1129 07:24:53.998767 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf23e65-d886-48b7-b5b8-8f23a81cdc81" containerName="proxy-httpd" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.998775 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf23e65-d886-48b7-b5b8-8f23a81cdc81" containerName="proxy-httpd" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.998954 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf23e65-d886-48b7-b5b8-8f23a81cdc81" containerName="ceilometer-notification-agent" Nov 29 07:24:53 crc kubenswrapper[4828]: I1129 07:24:53.998982 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf23e65-d886-48b7-b5b8-8f23a81cdc81" containerName="proxy-httpd" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.001033 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.006444 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.006634 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.009184 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.110143 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-scripts\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.110213 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d425ec7-4438-4994-b963-6a046f23934f-run-httpd\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.110255 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.110350 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d425ec7-4438-4994-b963-6a046f23934f-log-httpd\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " 
pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.110380 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxqcr\" (UniqueName: \"kubernetes.io/projected/5d425ec7-4438-4994-b963-6a046f23934f-kube-api-access-mxqcr\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.110557 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.110689 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-config-data\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.212049 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.212118 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-config-data\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.212164 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-scripts\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.212203 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d425ec7-4438-4994-b963-6a046f23934f-run-httpd\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.212245 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.213006 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d425ec7-4438-4994-b963-6a046f23934f-log-httpd\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.213047 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxqcr\" (UniqueName: \"kubernetes.io/projected/5d425ec7-4438-4994-b963-6a046f23934f-kube-api-access-mxqcr\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.218690 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d425ec7-4438-4994-b963-6a046f23934f-run-httpd\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " 
pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.218990 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d425ec7-4438-4994-b963-6a046f23934f-log-httpd\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.230653 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-config-data\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.231064 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.249293 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-scripts\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.251920 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.266372 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxqcr\" (UniqueName: 
\"kubernetes.io/projected/5d425ec7-4438-4994-b963-6a046f23934f-kube-api-access-mxqcr\") pod \"ceilometer-0\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.325440 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.821303 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:24:54 crc kubenswrapper[4828]: W1129 07:24:54.837856 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d425ec7_4438_4994_b963_6a046f23934f.slice/crio-a7b7f7e24e795c22157f144e7e3168f4980f386f76860860f4c6b699da44c20a WatchSource:0}: Error finding container a7b7f7e24e795c22157f144e7e3168f4980f386f76860860f4c6b699da44c20a: Status 404 returned error can't find the container with id a7b7f7e24e795c22157f144e7e3168f4980f386f76860860f4c6b699da44c20a Nov 29 07:24:54 crc kubenswrapper[4828]: I1129 07:24:54.903545 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d425ec7-4438-4994-b963-6a046f23934f","Type":"ContainerStarted","Data":"a7b7f7e24e795c22157f144e7e3168f4980f386f76860860f4c6b699da44c20a"} Nov 29 07:24:55 crc kubenswrapper[4828]: I1129 07:24:55.430538 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf23e65-d886-48b7-b5b8-8f23a81cdc81" path="/var/lib/kubelet/pods/adf23e65-d886-48b7-b5b8-8f23a81cdc81/volumes" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.327162 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-c8bd5b56c-6wm6v"] Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.330840 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.334383 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.334590 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.335825 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.348372 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-c8bd5b56c-6wm6v"] Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.459305 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ffaa931d-e049-475f-8a3a-95cdf41bf40f-etc-swift\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.459402 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffaa931d-e049-475f-8a3a-95cdf41bf40f-run-httpd\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.459453 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffaa931d-e049-475f-8a3a-95cdf41bf40f-config-data\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.459482 4828 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffaa931d-e049-475f-8a3a-95cdf41bf40f-combined-ca-bundle\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.459522 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffaa931d-e049-475f-8a3a-95cdf41bf40f-public-tls-certs\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.459551 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb5bv\" (UniqueName: \"kubernetes.io/projected/ffaa931d-e049-475f-8a3a-95cdf41bf40f-kube-api-access-gb5bv\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.459612 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffaa931d-e049-475f-8a3a-95cdf41bf40f-log-httpd\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.459637 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffaa931d-e049-475f-8a3a-95cdf41bf40f-internal-tls-certs\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc 
kubenswrapper[4828]: I1129 07:24:56.561803 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ffaa931d-e049-475f-8a3a-95cdf41bf40f-etc-swift\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.562043 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffaa931d-e049-475f-8a3a-95cdf41bf40f-run-httpd\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.562135 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffaa931d-e049-475f-8a3a-95cdf41bf40f-config-data\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.562237 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffaa931d-e049-475f-8a3a-95cdf41bf40f-combined-ca-bundle\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.562325 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffaa931d-e049-475f-8a3a-95cdf41bf40f-public-tls-certs\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.562415 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gb5bv\" (UniqueName: \"kubernetes.io/projected/ffaa931d-e049-475f-8a3a-95cdf41bf40f-kube-api-access-gb5bv\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.562535 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffaa931d-e049-475f-8a3a-95cdf41bf40f-log-httpd\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.562641 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffaa931d-e049-475f-8a3a-95cdf41bf40f-internal-tls-certs\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.562850 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffaa931d-e049-475f-8a3a-95cdf41bf40f-run-httpd\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.563288 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffaa931d-e049-475f-8a3a-95cdf41bf40f-log-httpd\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.571504 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ffaa931d-e049-475f-8a3a-95cdf41bf40f-combined-ca-bundle\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.571971 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ffaa931d-e049-475f-8a3a-95cdf41bf40f-etc-swift\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.572336 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffaa931d-e049-475f-8a3a-95cdf41bf40f-config-data\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.577120 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffaa931d-e049-475f-8a3a-95cdf41bf40f-internal-tls-certs\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.582244 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffaa931d-e049-475f-8a3a-95cdf41bf40f-public-tls-certs\") pod \"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.582294 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb5bv\" (UniqueName: \"kubernetes.io/projected/ffaa931d-e049-475f-8a3a-95cdf41bf40f-kube-api-access-gb5bv\") pod 
\"swift-proxy-c8bd5b56c-6wm6v\" (UID: \"ffaa931d-e049-475f-8a3a-95cdf41bf40f\") " pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.661357 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.936747 4828 generic.go:334] "Generic (PLEG): container finished" podID="ebec231e-52d4-4a47-9391-c57530dc6de4" containerID="277fcaa2500b14c70f6b46ca7c02783a5a575b2a979c1f55f3d3cc531fa3b0a6" exitCode=0 Nov 29 07:24:56 crc kubenswrapper[4828]: I1129 07:24:56.936853 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-4tb4g" event={"ID":"ebec231e-52d4-4a47-9391-c57530dc6de4","Type":"ContainerDied","Data":"277fcaa2500b14c70f6b46ca7c02783a5a575b2a979c1f55f3d3cc531fa3b0a6"} Nov 29 07:24:57 crc kubenswrapper[4828]: I1129 07:24:57.884996 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:24:57 crc kubenswrapper[4828]: I1129 07:24:57.952181 4828 generic.go:334] "Generic (PLEG): container finished" podID="b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c" containerID="7cd5cb7120d24918028551e6727f971b48efa1aa85c5494735482808a6365985" exitCode=0 Nov 29 07:24:57 crc kubenswrapper[4828]: I1129 07:24:57.952254 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vphwh" event={"ID":"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c","Type":"ContainerDied","Data":"7cd5cb7120d24918028551e6727f971b48efa1aa85c5494735482808a6365985"} Nov 29 07:25:00 crc kubenswrapper[4828]: I1129 07:25:00.990365 4828 generic.go:334] "Generic (PLEG): container finished" podID="3d3d2548-679c-4c58-8709-a28f3178c1d5" containerID="db6a36a8280d2a912a24e482556690d316cb1450bca2f9da1609125e73d6bbd1" exitCode=0 Nov 29 07:25:00 crc kubenswrapper[4828]: I1129 07:25:00.990540 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dwxw5" 
event={"ID":"3d3d2548-679c-4c58-8709-a28f3178c1d5","Type":"ContainerDied","Data":"db6a36a8280d2a912a24e482556690d316cb1450bca2f9da1609125e73d6bbd1"} Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.163487 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dwxw5" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.338442 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-config-data\") pod \"3d3d2548-679c-4c58-8709-a28f3178c1d5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.338751 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-db-sync-config-data\") pod \"3d3d2548-679c-4c58-8709-a28f3178c1d5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.338924 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d3d2548-679c-4c58-8709-a28f3178c1d5-etc-machine-id\") pod \"3d3d2548-679c-4c58-8709-a28f3178c1d5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.339101 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-scripts\") pod \"3d3d2548-679c-4c58-8709-a28f3178c1d5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.339214 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3d2548-679c-4c58-8709-a28f3178c1d5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"3d3d2548-679c-4c58-8709-a28f3178c1d5" (UID: "3d3d2548-679c-4c58-8709-a28f3178c1d5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.339442 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pph8\" (UniqueName: \"kubernetes.io/projected/3d3d2548-679c-4c58-8709-a28f3178c1d5-kube-api-access-8pph8\") pod \"3d3d2548-679c-4c58-8709-a28f3178c1d5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.339575 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-combined-ca-bundle\") pod \"3d3d2548-679c-4c58-8709-a28f3178c1d5\" (UID: \"3d3d2548-679c-4c58-8709-a28f3178c1d5\") " Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.340142 4828 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d3d2548-679c-4c58-8709-a28f3178c1d5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.345422 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-scripts" (OuterVolumeSpecName: "scripts") pod "3d3d2548-679c-4c58-8709-a28f3178c1d5" (UID: "3d3d2548-679c-4c58-8709-a28f3178c1d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.346095 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d3d2548-679c-4c58-8709-a28f3178c1d5-kube-api-access-8pph8" (OuterVolumeSpecName: "kube-api-access-8pph8") pod "3d3d2548-679c-4c58-8709-a28f3178c1d5" (UID: "3d3d2548-679c-4c58-8709-a28f3178c1d5"). InnerVolumeSpecName "kube-api-access-8pph8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.379660 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d3d2548-679c-4c58-8709-a28f3178c1d5" (UID: "3d3d2548-679c-4c58-8709-a28f3178c1d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.379761 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3d3d2548-679c-4c58-8709-a28f3178c1d5" (UID: "3d3d2548-679c-4c58-8709-a28f3178c1d5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.430507 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-4tb4g" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.442635 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.442674 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pph8\" (UniqueName: \"kubernetes.io/projected/3d3d2548-679c-4c58-8709-a28f3178c1d5-kube-api-access-8pph8\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.442686 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.442700 4828 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.458383 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-config-data" (OuterVolumeSpecName: "config-data") pod "3d3d2548-679c-4c58-8709-a28f3178c1d5" (UID: "3d3d2548-679c-4c58-8709-a28f3178c1d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.544724 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebec231e-52d4-4a47-9391-c57530dc6de4-combined-ca-bundle\") pod \"ebec231e-52d4-4a47-9391-c57530dc6de4\" (UID: \"ebec231e-52d4-4a47-9391-c57530dc6de4\") " Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.544899 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebec231e-52d4-4a47-9391-c57530dc6de4-config-data\") pod \"ebec231e-52d4-4a47-9391-c57530dc6de4\" (UID: \"ebec231e-52d4-4a47-9391-c57530dc6de4\") " Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.545067 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn6mb\" (UniqueName: \"kubernetes.io/projected/ebec231e-52d4-4a47-9391-c57530dc6de4-kube-api-access-jn6mb\") pod \"ebec231e-52d4-4a47-9391-c57530dc6de4\" (UID: \"ebec231e-52d4-4a47-9391-c57530dc6de4\") " Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.545841 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3d2548-679c-4c58-8709-a28f3178c1d5-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.550486 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebec231e-52d4-4a47-9391-c57530dc6de4-kube-api-access-jn6mb" (OuterVolumeSpecName: "kube-api-access-jn6mb") pod "ebec231e-52d4-4a47-9391-c57530dc6de4" (UID: "ebec231e-52d4-4a47-9391-c57530dc6de4"). InnerVolumeSpecName "kube-api-access-jn6mb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.574336 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebec231e-52d4-4a47-9391-c57530dc6de4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebec231e-52d4-4a47-9391-c57530dc6de4" (UID: "ebec231e-52d4-4a47-9391-c57530dc6de4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.623818 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebec231e-52d4-4a47-9391-c57530dc6de4-config-data" (OuterVolumeSpecName: "config-data") pod "ebec231e-52d4-4a47-9391-c57530dc6de4" (UID: "ebec231e-52d4-4a47-9391-c57530dc6de4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.648239 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebec231e-52d4-4a47-9391-c57530dc6de4-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.648295 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn6mb\" (UniqueName: \"kubernetes.io/projected/ebec231e-52d4-4a47-9391-c57530dc6de4-kube-api-access-jn6mb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.648307 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebec231e-52d4-4a47-9391-c57530dc6de4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:05 crc kubenswrapper[4828]: I1129 07:25:05.881314 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-vphwh" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.044988 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vphwh" event={"ID":"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c","Type":"ContainerDied","Data":"850bf65a0b38e5a4857c55e31d8de0153ab5ba2d127a3d2c97700f6997162042"} Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.045052 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="850bf65a0b38e5a4857c55e31d8de0153ab5ba2d127a3d2c97700f6997162042" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.045118 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vphwh" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.048895 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-4tb4g" event={"ID":"ebec231e-52d4-4a47-9391-c57530dc6de4","Type":"ContainerDied","Data":"75d35ccf4a0301aac48fa685c3195039e7bb1608f23fe19920eb09fd5d23a8e1"} Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.048913 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-4tb4g" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.048928 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75d35ccf4a0301aac48fa685c3195039e7bb1608f23fe19920eb09fd5d23a8e1" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.061016 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-db-sync-config-data\") pod \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\" (UID: \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\") " Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.061106 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d25wd\" (UniqueName: \"kubernetes.io/projected/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-kube-api-access-d25wd\") pod \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\" (UID: \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\") " Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.061156 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-combined-ca-bundle\") pod \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\" (UID: \"b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c\") " Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.062320 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dwxw5" event={"ID":"3d3d2548-679c-4c58-8709-a28f3178c1d5","Type":"ContainerDied","Data":"95e0f940814346a7997985cad5a2437b837279c6866cb456069139475d703c6f"} Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.062384 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95e0f940814346a7997985cad5a2437b837279c6866cb456069139475d703c6f" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.062451 4828 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/cinder-db-sync-dwxw5" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.068569 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c" (UID: "b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.070712 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-kube-api-access-d25wd" (OuterVolumeSpecName: "kube-api-access-d25wd") pod "b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c" (UID: "b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c"). InnerVolumeSpecName "kube-api-access-d25wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.132108 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c" (UID: "b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.163772 4828 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.163839 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d25wd\" (UniqueName: \"kubernetes.io/projected/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-kube-api-access-d25wd\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.163857 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.646811 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:25:06 crc kubenswrapper[4828]: E1129 07:25:06.648755 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebec231e-52d4-4a47-9391-c57530dc6de4" containerName="heat-db-sync" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.648922 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebec231e-52d4-4a47-9391-c57530dc6de4" containerName="heat-db-sync" Nov 29 07:25:06 crc kubenswrapper[4828]: E1129 07:25:06.649027 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c" containerName="barbican-db-sync" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.649102 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c" containerName="barbican-db-sync" Nov 29 07:25:06 crc kubenswrapper[4828]: E1129 07:25:06.649204 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3d2548-679c-4c58-8709-a28f3178c1d5" 
containerName="cinder-db-sync" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.649316 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3d2548-679c-4c58-8709-a28f3178c1d5" containerName="cinder-db-sync" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.649672 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3d2548-679c-4c58-8709-a28f3178c1d5" containerName="cinder-db-sync" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.649792 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c" containerName="barbican-db-sync" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.649877 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebec231e-52d4-4a47-9391-c57530dc6de4" containerName="heat-db-sync" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.654295 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.664020 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.669681 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-49vhl" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.669851 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.669970 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.676336 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.676445 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.676488 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktsj7\" (UniqueName: \"kubernetes.io/projected/d17b2e97-00d7-47ba-8b5c-c911a171bd27-kube-api-access-ktsj7\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.676542 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d17b2e97-00d7-47ba-8b5c-c911a171bd27-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.676612 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-scripts\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.676639 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " 
pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.702080 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.778541 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d17b2e97-00d7-47ba-8b5c-c911a171bd27-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.778617 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-scripts\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.778654 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.778798 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-config-data\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.778875 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " 
pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.778926 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktsj7\" (UniqueName: \"kubernetes.io/projected/d17b2e97-00d7-47ba-8b5c-c911a171bd27-kube-api-access-ktsj7\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.782261 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d17b2e97-00d7-47ba-8b5c-c911a171bd27-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.797366 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-config-data\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.798113 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.801743 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-scripts\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.817922 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.846781 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86d8f7d9df-99rls"] Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.848684 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.897489 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktsj7\" (UniqueName: \"kubernetes.io/projected/d17b2e97-00d7-47ba-8b5c-c911a171bd27-kube-api-access-ktsj7\") pod \"cinder-scheduler-0\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.899490 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-ovsdbserver-sb\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.899528 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-config\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.899558 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-dns-svc\") pod 
\"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.899609 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x77wc\" (UniqueName: \"kubernetes.io/projected/1d5072b7-b87f-4731-b7c5-80430f9d33a7-kube-api-access-x77wc\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.899693 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-dns-swift-storage-0\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.899761 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-ovsdbserver-nb\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:06 crc kubenswrapper[4828]: I1129 07:25:06.958362 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86d8f7d9df-99rls"] Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.053313 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.055238 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-ovsdbserver-sb\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.055308 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-config\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.055370 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-dns-svc\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.055402 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x77wc\" (UniqueName: \"kubernetes.io/projected/1d5072b7-b87f-4731-b7c5-80430f9d33a7-kube-api-access-x77wc\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.055544 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-dns-swift-storage-0\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" 
Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.055660 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-ovsdbserver-nb\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.066700 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-ovsdbserver-nb\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.067767 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-dns-svc\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.069905 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-dns-swift-storage-0\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.076007 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-config\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.083449 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-ovsdbserver-sb\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.138700 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-c8bd5b56c-6wm6v"] Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.210002 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x77wc\" (UniqueName: \"kubernetes.io/projected/1d5072b7-b87f-4731-b7c5-80430f9d33a7-kube-api-access-x77wc\") pod \"dnsmasq-dns-86d8f7d9df-99rls\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.234814 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.250666 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.250715 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d425ec7-4438-4994-b963-6a046f23934f","Type":"ContainerStarted","Data":"244a9c8b4f7001173670be40f0bf48981cb48e2bd361257e467dec696d0fe172"} Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.250829 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.259880 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"262aab08-d0cd-47a7-b913-c3df9daf6739","Type":"ContainerStarted","Data":"2975bd8ebe8bfd30173950e2a38c3b53b04f7ece6203f0d15bc6b449865c9252"} Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.260404 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.280912 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-57b9f79f95-xdwsq"] Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.282830 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.283439 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c8bd5b56c-6wm6v" event={"ID":"ffaa931d-e049-475f-8a3a-95cdf41bf40f","Type":"ContainerStarted","Data":"6eab9b70177ffc6977dce14e48fdfa1f30362641f4ea778170a1b323a86f10ca"} Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.291541 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.291786 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.292770 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kfl2r" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.326993 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-57b9f79f95-xdwsq"] Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.383689 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-scripts\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.392771 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.393417 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx72q\" (UniqueName: \"kubernetes.io/projected/85c4ca87-22a9-405d-9c64-0e4863f53625-kube-api-access-qx72q\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.393632 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-config-data-custom\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: \"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.393718 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljwxf\" (UniqueName: \"kubernetes.io/projected/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-kube-api-access-ljwxf\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: \"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.393748 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.393784 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-config-data-custom\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.393916 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-logs\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: \"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.393944 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85c4ca87-22a9-405d-9c64-0e4863f53625-etc-machine-id\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.394094 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-config-data\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: \"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.394247 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-combined-ca-bundle\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: 
\"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.394307 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-config-data\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.394348 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85c4ca87-22a9-405d-9c64-0e4863f53625-logs\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.418692 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.563588232 podStartE2EDuration="17.418652445s" podCreationTimestamp="2025-11-29 07:24:50 +0000 UTC" firstStartedPulling="2025-11-29 07:24:51.173381794 +0000 UTC m=+1430.795457852" lastFinishedPulling="2025-11-29 07:25:06.028446007 +0000 UTC m=+1445.650522065" observedRunningTime="2025-11-29 07:25:07.413974515 +0000 UTC m=+1447.036050573" watchObservedRunningTime="2025-11-29 07:25:07.418652445 +0000 UTC m=+1447.040728503" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.489474 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-69488889b8-dcf7m"] Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.491637 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.499177 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.500363 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-config-data\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.500435 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85c4ca87-22a9-405d-9c64-0e4863f53625-logs\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.500473 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-scripts\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.500500 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx72q\" (UniqueName: \"kubernetes.io/projected/85c4ca87-22a9-405d-9c64-0e4863f53625-kube-api-access-qx72q\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.500553 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-config-data-custom\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: 
\"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.500588 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljwxf\" (UniqueName: \"kubernetes.io/projected/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-kube-api-access-ljwxf\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: \"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.500614 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.500641 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-config-data-custom\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.500685 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-logs\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: \"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.500710 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85c4ca87-22a9-405d-9c64-0e4863f53625-etc-machine-id\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc 
kubenswrapper[4828]: I1129 07:25:07.500774 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-config-data\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: \"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.500837 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-combined-ca-bundle\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: \"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.506767 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-logs\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: \"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.509644 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85c4ca87-22a9-405d-9c64-0e4863f53625-etc-machine-id\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.510559 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-combined-ca-bundle\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: \"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.510958 4828 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-config-data\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.511450 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85c4ca87-22a9-405d-9c64-0e4863f53625-logs\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.522579 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-config-data-custom\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: \"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.530709 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-69488889b8-dcf7m"] Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.544742 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d8f7d9df-99rls"] Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.553751 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljwxf\" (UniqueName: \"kubernetes.io/projected/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-kube-api-access-ljwxf\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: \"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.557864 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c61053-a1cc-4c19-9042-61c7e4cdaffe-config-data\") pod \"barbican-worker-57b9f79f95-xdwsq\" (UID: 
\"c9c61053-a1cc-4c19-9042-61c7e4cdaffe\") " pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.558676 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-scripts\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.558974 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-config-data-custom\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.559487 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx72q\" (UniqueName: \"kubernetes.io/projected/85c4ca87-22a9-405d-9c64-0e4863f53625-kube-api-access-qx72q\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.565377 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.575179 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-f54k9"] Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.584682 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.595705 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-f54k9"] Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.632886 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnhlk\" (UniqueName: \"kubernetes.io/projected/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-kube-api-access-dnhlk\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: \"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.632982 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-config-data\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: \"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.633060 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-combined-ca-bundle\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: \"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.633102 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-logs\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: \"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc 
kubenswrapper[4828]: I1129 07:25:07.633140 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-config-data-custom\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: \"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.691502 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8c6f8f658-jqjcb"] Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.693469 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.696839 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.722569 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8c6f8f658-jqjcb"] Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.735304 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnhlk\" (UniqueName: \"kubernetes.io/projected/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-kube-api-access-dnhlk\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: \"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.735366 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-config-data\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: \"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 
07:25:07.735408 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.735457 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tmkd\" (UniqueName: \"kubernetes.io/projected/b7fa3104-0c77-4894-98bd-ecc7ab46c914-kube-api-access-5tmkd\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.735492 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.735546 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-combined-ca-bundle\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: \"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.735594 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " 
pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.735631 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-config\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.735656 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-logs\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: \"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.735687 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.735733 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-config-data-custom\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: \"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.740386 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-config-data-custom\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: 
\"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.741911 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-logs\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: \"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.751718 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-config-data\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: \"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.759159 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-combined-ca-bundle\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: \"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.766619 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.768068 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnhlk\" (UniqueName: \"kubernetes.io/projected/a5b5741a-29b4-4c45-85c7-8c2cb55857a3-kube-api-access-dnhlk\") pod \"barbican-keystone-listener-69488889b8-dcf7m\" (UID: \"a5b5741a-29b4-4c45-85c7-8c2cb55857a3\") " pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.789306 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-57b9f79f95-xdwsq" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.849598 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gfmg\" (UniqueName: \"kubernetes.io/projected/18bf2da4-1500-4545-b55a-2a629614b238-kube-api-access-5gfmg\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.849661 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-config-data-custom\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.849758 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.849803 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5tmkd\" (UniqueName: \"kubernetes.io/projected/b7fa3104-0c77-4894-98bd-ecc7ab46c914-kube-api-access-5tmkd\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.849835 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.849894 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18bf2da4-1500-4545-b55a-2a629614b238-logs\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.849931 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.849956 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-combined-ca-bundle\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.849982 4828 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-config\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.850014 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.850049 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-config-data\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.854572 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.855474 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-config\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.855723 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.855791 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.859179 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.862631 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.878067 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tmkd\" (UniqueName: \"kubernetes.io/projected/b7fa3104-0c77-4894-98bd-ecc7ab46c914-kube-api-access-5tmkd\") pod \"dnsmasq-dns-69c986f6d7-f54k9\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.946750 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.955881 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18bf2da4-1500-4545-b55a-2a629614b238-logs\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.955956 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-combined-ca-bundle\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.956020 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-config-data\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.956061 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gfmg\" (UniqueName: \"kubernetes.io/projected/18bf2da4-1500-4545-b55a-2a629614b238-kube-api-access-5gfmg\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.956085 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-config-data-custom\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " 
pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.958816 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18bf2da4-1500-4545-b55a-2a629614b238-logs\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.962214 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-combined-ca-bundle\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.963854 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-config-data\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.968973 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-config-data-custom\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:07 crc kubenswrapper[4828]: I1129 07:25:07.984433 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gfmg\" (UniqueName: \"kubernetes.io/projected/18bf2da4-1500-4545-b55a-2a629614b238-kube-api-access-5gfmg\") pod \"barbican-api-8c6f8f658-jqjcb\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") " pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:08 crc kubenswrapper[4828]: I1129 
07:25:08.027317 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:08 crc kubenswrapper[4828]: I1129 07:25:08.256108 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d8f7d9df-99rls"] Nov 29 07:25:08 crc kubenswrapper[4828]: I1129 07:25:08.311231 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c8bd5b56c-6wm6v" event={"ID":"ffaa931d-e049-475f-8a3a-95cdf41bf40f","Type":"ContainerStarted","Data":"67ce9703023209151a06a0a533b30ebcd9550b1158444f1df08b48b77713c4c2"} Nov 29 07:25:08 crc kubenswrapper[4828]: I1129 07:25:08.319460 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" event={"ID":"1d5072b7-b87f-4731-b7c5-80430f9d33a7","Type":"ContainerStarted","Data":"f52005ef79a2bec1714293e692942b47d43b784cd8fb0b59905d307af6069625"} Nov 29 07:25:08 crc kubenswrapper[4828]: I1129 07:25:08.515487 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:25:08 crc kubenswrapper[4828]: W1129 07:25:08.519947 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd17b2e97_00d7_47ba_8b5c_c911a171bd27.slice/crio-b713118ffa590e0d0d37a1eaf6ba7bd755dd036c9883fe5a57a7de915cbe3cc6 WatchSource:0}: Error finding container b713118ffa590e0d0d37a1eaf6ba7bd755dd036c9883fe5a57a7de915cbe3cc6: Status 404 returned error can't find the container with id b713118ffa590e0d0d37a1eaf6ba7bd755dd036c9883fe5a57a7de915cbe3cc6 Nov 29 07:25:08 crc kubenswrapper[4828]: I1129 07:25:08.851547 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:25:08 crc kubenswrapper[4828]: W1129 07:25:08.887482 4828 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85c4ca87_22a9_405d_9c64_0e4863f53625.slice/crio-fe161ba1595f5e6becc0fb5f2a520348cd074fa5303ca6dcc04688d71daaa797 WatchSource:0}: Error finding container fe161ba1595f5e6becc0fb5f2a520348cd074fa5303ca6dcc04688d71daaa797: Status 404 returned error can't find the container with id fe161ba1595f5e6becc0fb5f2a520348cd074fa5303ca6dcc04688d71daaa797 Nov 29 07:25:08 crc kubenswrapper[4828]: I1129 07:25:08.888698 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-57b9f79f95-xdwsq"] Nov 29 07:25:08 crc kubenswrapper[4828]: I1129 07:25:08.927694 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-f54k9"] Nov 29 07:25:08 crc kubenswrapper[4828]: I1129 07:25:08.962235 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-69488889b8-dcf7m"] Nov 29 07:25:09 crc kubenswrapper[4828]: I1129 07:25:09.139231 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8c6f8f658-jqjcb"] Nov 29 07:25:09 crc kubenswrapper[4828]: I1129 07:25:09.358237 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d425ec7-4438-4994-b963-6a046f23934f","Type":"ContainerStarted","Data":"56f318fdc3e557003e060a2de0fa919e123713829127f0f25af2df001dcbd79f"} Nov 29 07:25:09 crc kubenswrapper[4828]: I1129 07:25:09.367225 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"85c4ca87-22a9-405d-9c64-0e4863f53625","Type":"ContainerStarted","Data":"fe161ba1595f5e6becc0fb5f2a520348cd074fa5303ca6dcc04688d71daaa797"} Nov 29 07:25:09 crc kubenswrapper[4828]: I1129 07:25:09.377913 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c8bd5b56c-6wm6v" event={"ID":"ffaa931d-e049-475f-8a3a-95cdf41bf40f","Type":"ContainerStarted","Data":"c483f5465560dda480216f377cceae1953659746a977af85fbdc1cb30e94830f"} 
Nov 29 07:25:09 crc kubenswrapper[4828]: I1129 07:25:09.379647 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:25:09 crc kubenswrapper[4828]: I1129 07:25:09.379692 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:25:09 crc kubenswrapper[4828]: I1129 07:25:09.390880 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d17b2e97-00d7-47ba-8b5c-c911a171bd27","Type":"ContainerStarted","Data":"b713118ffa590e0d0d37a1eaf6ba7bd755dd036c9883fe5a57a7de915cbe3cc6"} Nov 29 07:25:09 crc kubenswrapper[4828]: I1129 07:25:09.393398 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57b9f79f95-xdwsq" event={"ID":"c9c61053-a1cc-4c19-9042-61c7e4cdaffe","Type":"ContainerStarted","Data":"90fe131dde9a4715392a1f6b48e1b83d5717373d4a7cab0e188bada7093fab0e"} Nov 29 07:25:09 crc kubenswrapper[4828]: I1129 07:25:09.398486 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" event={"ID":"b7fa3104-0c77-4894-98bd-ecc7ab46c914","Type":"ContainerStarted","Data":"f8d914ad42697964d69b3ef5231fb75f85af7d0c1769e8ff4f2cfc98104e07d1"} Nov 29 07:25:09 crc kubenswrapper[4828]: I1129 07:25:09.413754 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-c8bd5b56c-6wm6v" podStartSLOduration=13.413712043 podStartE2EDuration="13.413712043s" podCreationTimestamp="2025-11-29 07:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:09.40033064 +0000 UTC m=+1449.022406708" watchObservedRunningTime="2025-11-29 07:25:09.413712043 +0000 UTC m=+1449.035788101" Nov 29 07:25:10 crc kubenswrapper[4828]: I1129 07:25:10.401155 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-api-0"] Nov 29 07:25:10 crc kubenswrapper[4828]: I1129 07:25:10.412089 4828 generic.go:334] "Generic (PLEG): container finished" podID="1d5072b7-b87f-4731-b7c5-80430f9d33a7" containerID="42253b7280f4514bec075c16ad6cd0271368a12b77b4625a604839b0070a89f6" exitCode=0 Nov 29 07:25:10 crc kubenswrapper[4828]: I1129 07:25:10.413120 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" event={"ID":"1d5072b7-b87f-4731-b7c5-80430f9d33a7","Type":"ContainerDied","Data":"42253b7280f4514bec075c16ad6cd0271368a12b77b4625a604839b0070a89f6"} Nov 29 07:25:11 crc kubenswrapper[4828]: W1129 07:25:11.110809 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18bf2da4_1500_4545_b55a_2a629614b238.slice/crio-0bd9085cb9c5b25986847480b40d7dfc3dab9aa8b383c4c059c5ee31a5aad8d9 WatchSource:0}: Error finding container 0bd9085cb9c5b25986847480b40d7dfc3dab9aa8b383c4c059c5ee31a5aad8d9: Status 404 returned error can't find the container with id 0bd9085cb9c5b25986847480b40d7dfc3dab9aa8b383c4c059c5ee31a5aad8d9 Nov 29 07:25:11 crc kubenswrapper[4828]: I1129 07:25:11.446546 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" event={"ID":"a5b5741a-29b4-4c45-85c7-8c2cb55857a3","Type":"ContainerStarted","Data":"dd7f2f5e39727e87efed14312b3d25abb2e48f30c895eaa465184e91af2098c9"} Nov 29 07:25:11 crc kubenswrapper[4828]: I1129 07:25:11.446601 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8c6f8f658-jqjcb" event={"ID":"18bf2da4-1500-4545-b55a-2a629614b238","Type":"ContainerStarted","Data":"0bd9085cb9c5b25986847480b40d7dfc3dab9aa8b383c4c059c5ee31a5aad8d9"} Nov 29 07:25:12 crc kubenswrapper[4828]: I1129 07:25:12.455879 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8c6f8f658-jqjcb" 
event={"ID":"18bf2da4-1500-4545-b55a-2a629614b238","Type":"ContainerStarted","Data":"56b7194e30ca951f8c1038ba9c1065b89eba6bbe44a5b3d4b6b6b73a84d2657f"} Nov 29 07:25:12 crc kubenswrapper[4828]: I1129 07:25:12.461630 4828 generic.go:334] "Generic (PLEG): container finished" podID="b7fa3104-0c77-4894-98bd-ecc7ab46c914" containerID="fc6fbc08e0b3fdddfbff63d1dc19611b74461daae7afac88225ba380bb81565d" exitCode=0 Nov 29 07:25:12 crc kubenswrapper[4828]: I1129 07:25:12.462758 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" event={"ID":"b7fa3104-0c77-4894-98bd-ecc7ab46c914","Type":"ContainerDied","Data":"fc6fbc08e0b3fdddfbff63d1dc19611b74461daae7afac88225ba380bb81565d"} Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.045867 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.222234 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-dns-swift-storage-0\") pod \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.222341 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x77wc\" (UniqueName: \"kubernetes.io/projected/1d5072b7-b87f-4731-b7c5-80430f9d33a7-kube-api-access-x77wc\") pod \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.222370 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-ovsdbserver-nb\") pod \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " Nov 
29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.222418 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-ovsdbserver-sb\") pod \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.222567 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-config\") pod \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.222600 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-dns-svc\") pod \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\" (UID: \"1d5072b7-b87f-4731-b7c5-80430f9d33a7\") " Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.241017 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d5072b7-b87f-4731-b7c5-80430f9d33a7-kube-api-access-x77wc" (OuterVolumeSpecName: "kube-api-access-x77wc") pod "1d5072b7-b87f-4731-b7c5-80430f9d33a7" (UID: "1d5072b7-b87f-4731-b7c5-80430f9d33a7"). InnerVolumeSpecName "kube-api-access-x77wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.266458 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d5072b7-b87f-4731-b7c5-80430f9d33a7" (UID: "1d5072b7-b87f-4731-b7c5-80430f9d33a7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.278540 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d5072b7-b87f-4731-b7c5-80430f9d33a7" (UID: "1d5072b7-b87f-4731-b7c5-80430f9d33a7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.292733 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d5072b7-b87f-4731-b7c5-80430f9d33a7" (UID: "1d5072b7-b87f-4731-b7c5-80430f9d33a7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.302829 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-config" (OuterVolumeSpecName: "config") pod "1d5072b7-b87f-4731-b7c5-80430f9d33a7" (UID: "1d5072b7-b87f-4731-b7c5-80430f9d33a7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.324673 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.324708 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.324718 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x77wc\" (UniqueName: \"kubernetes.io/projected/1d5072b7-b87f-4731-b7c5-80430f9d33a7-kube-api-access-x77wc\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.324727 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.324737 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.353590 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d5072b7-b87f-4731-b7c5-80430f9d33a7" (UID: "1d5072b7-b87f-4731-b7c5-80430f9d33a7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.426827 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d5072b7-b87f-4731-b7c5-80430f9d33a7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.521392 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8c6f8f658-jqjcb" event={"ID":"18bf2da4-1500-4545-b55a-2a629614b238","Type":"ContainerStarted","Data":"9e747e5ca0b0a38acd3319c454970b7bf9a3262a489221ed8aff488f76ab563f"} Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.521879 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.521953 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.538429 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" event={"ID":"b7fa3104-0c77-4894-98bd-ecc7ab46c914","Type":"ContainerStarted","Data":"aedb6d85ce604669aa517d5865251356c75afe5dc5f3e805eed2e3b871c99e6a"} Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.539917 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.568248 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"85c4ca87-22a9-405d-9c64-0e4863f53625","Type":"ContainerStarted","Data":"69be8d900beda449bd74c20e7d314976d464ffd7a1f337eaacf1e332ca0795c5"} Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.574753 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8c6f8f658-jqjcb" 
podStartSLOduration=6.57472836 podStartE2EDuration="6.57472836s" podCreationTimestamp="2025-11-29 07:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:13.564167988 +0000 UTC m=+1453.186244066" watchObservedRunningTime="2025-11-29 07:25:13.57472836 +0000 UTC m=+1453.196804428" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.596471 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d425ec7-4438-4994-b963-6a046f23934f","Type":"ContainerStarted","Data":"1aeb9826496db2985242d15abc52612d57415aa54fa17197b17965086df5031a"} Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.600102 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" podStartSLOduration=6.600074801 podStartE2EDuration="6.600074801s" podCreationTimestamp="2025-11-29 07:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:13.592358743 +0000 UTC m=+1453.214434811" watchObservedRunningTime="2025-11-29 07:25:13.600074801 +0000 UTC m=+1453.222150859" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.601752 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" event={"ID":"1d5072b7-b87f-4731-b7c5-80430f9d33a7","Type":"ContainerDied","Data":"f52005ef79a2bec1714293e692942b47d43b784cd8fb0b59905d307af6069625"} Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.601848 4828 scope.go:117] "RemoveContainer" containerID="42253b7280f4514bec075c16ad6cd0271368a12b77b4625a604839b0070a89f6" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.602002 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86d8f7d9df-99rls" Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.692252 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d8f7d9df-99rls"] Nov 29 07:25:13 crc kubenswrapper[4828]: I1129 07:25:13.702648 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86d8f7d9df-99rls"] Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.638352 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d17b2e97-00d7-47ba-8b5c-c911a171bd27","Type":"ContainerStarted","Data":"bbf056d07eb70302ab25f7ae4190b7f5d5a90e65497f461202b1aa5c290f8cd0"} Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.644256 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"85c4ca87-22a9-405d-9c64-0e4863f53625","Type":"ContainerStarted","Data":"6bb681a84d5594ab812447577367b4ea5c008575e626da1f5041cc94703d12f0"} Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.644472 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="85c4ca87-22a9-405d-9c64-0e4863f53625" containerName="cinder-api-log" containerID="cri-o://69be8d900beda449bd74c20e7d314976d464ffd7a1f337eaacf1e332ca0795c5" gracePeriod=30 Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.644634 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="85c4ca87-22a9-405d-9c64-0e4863f53625" containerName="cinder-api" containerID="cri-o://6bb681a84d5594ab812447577367b4ea5c008575e626da1f5041cc94703d12f0" gracePeriod=30 Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.644813 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.674618 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-api-0" podStartSLOduration=7.674594528 podStartE2EDuration="7.674594528s" podCreationTimestamp="2025-11-29 07:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:14.667016353 +0000 UTC m=+1454.289092421" watchObservedRunningTime="2025-11-29 07:25:14.674594528 +0000 UTC m=+1454.296670586" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.745633 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-cc498b8c4-hstck"] Nov 29 07:25:14 crc kubenswrapper[4828]: E1129 07:25:14.746551 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d5072b7-b87f-4731-b7c5-80430f9d33a7" containerName="init" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.746578 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d5072b7-b87f-4731-b7c5-80430f9d33a7" containerName="init" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.746865 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d5072b7-b87f-4731-b7c5-80430f9d33a7" containerName="init" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.752481 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.755668 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.755744 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.763203 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-cc498b8c4-hstck"] Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.872778 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-config-data-custom\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.872854 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-public-tls-certs\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.872884 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/346a1de9-a6d0-451f-8ca9-172d43dc99f9-logs\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.873114 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx9cq\" (UniqueName: 
\"kubernetes.io/projected/346a1de9-a6d0-451f-8ca9-172d43dc99f9-kube-api-access-zx9cq\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.873195 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-internal-tls-certs\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.873259 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-config-data\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.873450 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-combined-ca-bundle\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.975569 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-public-tls-certs\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.975657 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/346a1de9-a6d0-451f-8ca9-172d43dc99f9-logs\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.975712 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx9cq\" (UniqueName: \"kubernetes.io/projected/346a1de9-a6d0-451f-8ca9-172d43dc99f9-kube-api-access-zx9cq\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.975745 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-internal-tls-certs\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.975781 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-config-data\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.975834 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-combined-ca-bundle\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.975981 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-config-data-custom\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.976332 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/346a1de9-a6d0-451f-8ca9-172d43dc99f9-logs\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.982996 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-public-tls-certs\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.983808 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-config-data-custom\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:14 crc kubenswrapper[4828]: I1129 07:25:14.985522 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-config-data\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:15 crc kubenswrapper[4828]: I1129 07:25:15.000896 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx9cq\" (UniqueName: \"kubernetes.io/projected/346a1de9-a6d0-451f-8ca9-172d43dc99f9-kube-api-access-zx9cq\") pod 
\"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:15 crc kubenswrapper[4828]: I1129 07:25:15.001956 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-combined-ca-bundle\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:15 crc kubenswrapper[4828]: I1129 07:25:15.004991 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/346a1de9-a6d0-451f-8ca9-172d43dc99f9-internal-tls-certs\") pod \"barbican-api-cc498b8c4-hstck\" (UID: \"346a1de9-a6d0-451f-8ca9-172d43dc99f9\") " pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:15 crc kubenswrapper[4828]: I1129 07:25:15.094656 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:15 crc kubenswrapper[4828]: I1129 07:25:15.424739 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d5072b7-b87f-4731-b7c5-80430f9d33a7" path="/var/lib/kubelet/pods/1d5072b7-b87f-4731-b7c5-80430f9d33a7/volumes" Nov 29 07:25:16 crc kubenswrapper[4828]: I1129 07:25:16.669928 4828 generic.go:334] "Generic (PLEG): container finished" podID="85c4ca87-22a9-405d-9c64-0e4863f53625" containerID="6bb681a84d5594ab812447577367b4ea5c008575e626da1f5041cc94703d12f0" exitCode=0 Nov 29 07:25:16 crc kubenswrapper[4828]: I1129 07:25:16.670203 4828 generic.go:334] "Generic (PLEG): container finished" podID="85c4ca87-22a9-405d-9c64-0e4863f53625" containerID="69be8d900beda449bd74c20e7d314976d464ffd7a1f337eaacf1e332ca0795c5" exitCode=143 Nov 29 07:25:16 crc kubenswrapper[4828]: I1129 07:25:16.669970 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"85c4ca87-22a9-405d-9c64-0e4863f53625","Type":"ContainerDied","Data":"6bb681a84d5594ab812447577367b4ea5c008575e626da1f5041cc94703d12f0"} Nov 29 07:25:16 crc kubenswrapper[4828]: I1129 07:25:16.670251 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"85c4ca87-22a9-405d-9c64-0e4863f53625","Type":"ContainerDied","Data":"69be8d900beda449bd74c20e7d314976d464ffd7a1f337eaacf1e332ca0795c5"} Nov 29 07:25:16 crc kubenswrapper[4828]: I1129 07:25:16.672984 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:25:16 crc kubenswrapper[4828]: I1129 07:25:16.673856 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-c8bd5b56c-6wm6v" Nov 29 07:25:17 crc kubenswrapper[4828]: I1129 07:25:17.949501 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:18 crc 
kubenswrapper[4828]: I1129 07:25:18.127344 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.155920 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cwn7b"] Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.164085 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" podUID="dcb66a69-5eb2-4468-b7b9-beb16a814a76" containerName="dnsmasq-dns" containerID="cri-o://d0285cb48b7fee68afd4b7e46dd2e1c37c6a857c96473acc3179b48542a1e1b4" gracePeriod=10 Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.249022 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85c4ca87-22a9-405d-9c64-0e4863f53625-logs\") pod \"85c4ca87-22a9-405d-9c64-0e4863f53625\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.253566 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-scripts\") pod \"85c4ca87-22a9-405d-9c64-0e4863f53625\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.253628 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx72q\" (UniqueName: \"kubernetes.io/projected/85c4ca87-22a9-405d-9c64-0e4863f53625-kube-api-access-qx72q\") pod \"85c4ca87-22a9-405d-9c64-0e4863f53625\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.253665 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-config-data\") pod 
\"85c4ca87-22a9-405d-9c64-0e4863f53625\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.253706 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-config-data-custom\") pod \"85c4ca87-22a9-405d-9c64-0e4863f53625\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.253888 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-combined-ca-bundle\") pod \"85c4ca87-22a9-405d-9c64-0e4863f53625\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.253926 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85c4ca87-22a9-405d-9c64-0e4863f53625-etc-machine-id\") pod \"85c4ca87-22a9-405d-9c64-0e4863f53625\" (UID: \"85c4ca87-22a9-405d-9c64-0e4863f53625\") " Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.252904 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85c4ca87-22a9-405d-9c64-0e4863f53625-logs" (OuterVolumeSpecName: "logs") pod "85c4ca87-22a9-405d-9c64-0e4863f53625" (UID: "85c4ca87-22a9-405d-9c64-0e4863f53625"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.260874 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85c4ca87-22a9-405d-9c64-0e4863f53625-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "85c4ca87-22a9-405d-9c64-0e4863f53625" (UID: "85c4ca87-22a9-405d-9c64-0e4863f53625"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.270896 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-999b4d64b-9brmm"] Nov 29 07:25:18 crc kubenswrapper[4828]: E1129 07:25:18.271578 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c4ca87-22a9-405d-9c64-0e4863f53625" containerName="cinder-api" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.271617 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c4ca87-22a9-405d-9c64-0e4863f53625" containerName="cinder-api" Nov 29 07:25:18 crc kubenswrapper[4828]: E1129 07:25:18.271639 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c4ca87-22a9-405d-9c64-0e4863f53625" containerName="cinder-api-log" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.271648 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c4ca87-22a9-405d-9c64-0e4863f53625" containerName="cinder-api-log" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.272004 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="85c4ca87-22a9-405d-9c64-0e4863f53625" containerName="cinder-api-log" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.272035 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="85c4ca87-22a9-405d-9c64-0e4863f53625" containerName="cinder-api" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.277544 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "85c4ca87-22a9-405d-9c64-0e4863f53625" (UID: "85c4ca87-22a9-405d-9c64-0e4863f53625"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.277763 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85c4ca87-22a9-405d-9c64-0e4863f53625-kube-api-access-qx72q" (OuterVolumeSpecName: "kube-api-access-qx72q") pod "85c4ca87-22a9-405d-9c64-0e4863f53625" (UID: "85c4ca87-22a9-405d-9c64-0e4863f53625"). InnerVolumeSpecName "kube-api-access-qx72q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.277897 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-scripts" (OuterVolumeSpecName: "scripts") pod "85c4ca87-22a9-405d-9c64-0e4863f53625" (UID: "85c4ca87-22a9-405d-9c64-0e4863f53625"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.291323 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.296140 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-999b4d64b-9brmm"] Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.296935 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.297340 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-8vljh" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.310854 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.356919 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-config-data-custom\") pod \"heat-engine-999b4d64b-9brmm\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.356969 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtggh\" (UniqueName: \"kubernetes.io/projected/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-kube-api-access-dtggh\") pod \"heat-engine-999b4d64b-9brmm\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.357024 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-combined-ca-bundle\") pod \"heat-engine-999b4d64b-9brmm\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:18 crc 
kubenswrapper[4828]: I1129 07:25:18.357139 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-config-data\") pod \"heat-engine-999b4d64b-9brmm\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.357188 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85c4ca87-22a9-405d-9c64-0e4863f53625-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.357198 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.357207 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx72q\" (UniqueName: \"kubernetes.io/projected/85c4ca87-22a9-405d-9c64-0e4863f53625-kube-api-access-qx72q\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.357217 4828 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.357225 4828 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85c4ca87-22a9-405d-9c64-0e4863f53625-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.359474 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"85c4ca87-22a9-405d-9c64-0e4863f53625" (UID: "85c4ca87-22a9-405d-9c64-0e4863f53625"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.444684 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-config-data" (OuterVolumeSpecName: "config-data") pod "85c4ca87-22a9-405d-9c64-0e4863f53625" (UID: "85c4ca87-22a9-405d-9c64-0e4863f53625"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.459482 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-config-data\") pod \"heat-engine-999b4d64b-9brmm\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.459567 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-config-data-custom\") pod \"heat-engine-999b4d64b-9brmm\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.459614 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtggh\" (UniqueName: \"kubernetes.io/projected/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-kube-api-access-dtggh\") pod \"heat-engine-999b4d64b-9brmm\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.459657 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-combined-ca-bundle\") pod \"heat-engine-999b4d64b-9brmm\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.459783 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.459799 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85c4ca87-22a9-405d-9c64-0e4863f53625-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.476311 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-config-data-custom\") pod \"heat-engine-999b4d64b-9brmm\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.493590 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-config-data\") pod \"heat-engine-999b4d64b-9brmm\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.511074 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtggh\" (UniqueName: \"kubernetes.io/projected/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-kube-api-access-dtggh\") pod \"heat-engine-999b4d64b-9brmm\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.520119 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-combined-ca-bundle\") pod \"heat-engine-999b4d64b-9brmm\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.566340 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78d5585959-gnl5p"] Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.567899 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.667813 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78d5585959-gnl5p"] Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.675325 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-dns-svc\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.675389 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-ovsdbserver-nb\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.675432 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-config\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 
07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.675471 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj5km\" (UniqueName: \"kubernetes.io/projected/4852ae69-6066-464b-9934-604b2b5ae8a4-kube-api-access-qj5km\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.675510 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-dns-swift-storage-0\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.675555 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-ovsdbserver-sb\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.702402 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7c4d784bd9-s5pdk"] Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.703682 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.710585 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.759584 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7c4d784bd9-s5pdk"] Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.779035 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2hk5\" (UniqueName: \"kubernetes.io/projected/76228783-3735-4393-af2d-cd8ace3bd0aa-kube-api-access-l2hk5\") pod \"heat-cfnapi-7c4d784bd9-s5pdk\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.779092 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-ovsdbserver-nb\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.779156 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-config\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.779183 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-config-data-custom\") pod \"heat-cfnapi-7c4d784bd9-s5pdk\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" 
Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.779210 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-combined-ca-bundle\") pod \"heat-cfnapi-7c4d784bd9-s5pdk\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.779242 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj5km\" (UniqueName: \"kubernetes.io/projected/4852ae69-6066-464b-9934-604b2b5ae8a4-kube-api-access-qj5km\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.779312 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-dns-swift-storage-0\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.779376 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-ovsdbserver-sb\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.779438 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-dns-svc\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc 
kubenswrapper[4828]: I1129 07:25:18.779470 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-config-data\") pod \"heat-cfnapi-7c4d784bd9-s5pdk\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.780225 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-config\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.780568 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-ovsdbserver-sb\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.780701 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-dns-svc\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.780835 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-ovsdbserver-nb\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.781473 4828 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-dns-swift-storage-0\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.792350 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-77df56fcb4-fs2h4"] Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.793648 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.801425 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.802920 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-77df56fcb4-fs2h4"] Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.814435 4828 generic.go:334] "Generic (PLEG): container finished" podID="dcb66a69-5eb2-4468-b7b9-beb16a814a76" containerID="d0285cb48b7fee68afd4b7e46dd2e1c37c6a857c96473acc3179b48542a1e1b4" exitCode=0 Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.814551 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" event={"ID":"dcb66a69-5eb2-4468-b7b9-beb16a814a76","Type":"ContainerDied","Data":"d0285cb48b7fee68afd4b7e46dd2e1c37c6a857c96473acc3179b48542a1e1b4"} Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.815759 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj5km\" (UniqueName: \"kubernetes.io/projected/4852ae69-6066-464b-9934-604b2b5ae8a4-kube-api-access-qj5km\") pod \"dnsmasq-dns-78d5585959-gnl5p\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.854155 4828 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-api-0" event={"ID":"85c4ca87-22a9-405d-9c64-0e4863f53625","Type":"ContainerDied","Data":"fe161ba1595f5e6becc0fb5f2a520348cd074fa5303ca6dcc04688d71daaa797"} Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.854668 4828 scope.go:117] "RemoveContainer" containerID="6bb681a84d5594ab812447577367b4ea5c008575e626da1f5041cc94703d12f0" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.854908 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.864997 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.866610 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.882963 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-combined-ca-bundle\") pod \"heat-api-77df56fcb4-fs2h4\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.883101 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-config-data\") pod \"heat-api-77df56fcb4-fs2h4\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.883153 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-config-data\") pod \"heat-cfnapi-7c4d784bd9-s5pdk\" 
(UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.883183 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2hk5\" (UniqueName: \"kubernetes.io/projected/76228783-3735-4393-af2d-cd8ace3bd0aa-kube-api-access-l2hk5\") pod \"heat-cfnapi-7c4d784bd9-s5pdk\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.883233 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9f88\" (UniqueName: \"kubernetes.io/projected/1227653a-94b6-4867-b24a-3a6e70f62d3b-kube-api-access-b9f88\") pod \"heat-api-77df56fcb4-fs2h4\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.883289 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-config-data-custom\") pod \"heat-cfnapi-7c4d784bd9-s5pdk\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.883317 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-combined-ca-bundle\") pod \"heat-cfnapi-7c4d784bd9-s5pdk\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.883361 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-config-data-custom\") pod 
\"heat-api-77df56fcb4-fs2h4\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.900919 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-combined-ca-bundle\") pod \"heat-cfnapi-7c4d784bd9-s5pdk\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.912356 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2hk5\" (UniqueName: \"kubernetes.io/projected/76228783-3735-4393-af2d-cd8ace3bd0aa-kube-api-access-l2hk5\") pod \"heat-cfnapi-7c4d784bd9-s5pdk\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.915822 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-config-data\") pod \"heat-cfnapi-7c4d784bd9-s5pdk\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.917488 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-config-data-custom\") pod \"heat-cfnapi-7c4d784bd9-s5pdk\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.984628 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-combined-ca-bundle\") pod \"heat-api-77df56fcb4-fs2h4\" (UID: 
\"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.984772 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-config-data\") pod \"heat-api-77df56fcb4-fs2h4\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.984864 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9f88\" (UniqueName: \"kubernetes.io/projected/1227653a-94b6-4867-b24a-3a6e70f62d3b-kube-api-access-b9f88\") pod \"heat-api-77df56fcb4-fs2h4\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.984948 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-config-data-custom\") pod \"heat-api-77df56fcb4-fs2h4\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.992443 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:25:18 crc kubenswrapper[4828]: I1129 07:25:18.993896 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-config-data\") pod \"heat-api-77df56fcb4-fs2h4\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.001019 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-config-data-custom\") pod \"heat-api-77df56fcb4-fs2h4\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.013631 4828 scope.go:117] "RemoveContainer" containerID="69be8d900beda449bd74c20e7d314976d464ffd7a1f337eaacf1e332ca0795c5" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.014569 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-combined-ca-bundle\") pod \"heat-api-77df56fcb4-fs2h4\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.040237 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9f88\" (UniqueName: \"kubernetes.io/projected/1227653a-94b6-4867-b24a-3a6e70f62d3b-kube-api-access-b9f88\") pod \"heat-api-77df56fcb4-fs2h4\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.064559 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.127187 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.134564 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.144718 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.144980 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.145145 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.153007 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.174581 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-cc498b8c4-hstck"] Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.192336 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.192376 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.192452 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-config-data-custom\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " 
pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.192497 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0856d1d8-20d9-4558-98fd-f955bbc00df7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.192516 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-scripts\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.192541 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x777t\" (UniqueName: \"kubernetes.io/projected/0856d1d8-20d9-4558-98fd-f955bbc00df7-kube-api-access-x777t\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.192570 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-config-data\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.192635 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0856d1d8-20d9-4558-98fd-f955bbc00df7-logs\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.192748 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.225045 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:19 crc kubenswrapper[4828]: W1129 07:25:19.225863 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod346a1de9_a6d0_451f_8ca9_172d43dc99f9.slice/crio-141ab910dd2cde6b954f523eac2d65be77990397c95900f4a65184a5a61c10a6 WatchSource:0}: Error finding container 141ab910dd2cde6b954f523eac2d65be77990397c95900f4a65184a5a61c10a6: Status 404 returned error can't find the container with id 141ab910dd2cde6b954f523eac2d65be77990397c95900f4a65184a5a61c10a6 Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.288631 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.294536 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-config-data\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.294641 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0856d1d8-20d9-4558-98fd-f955bbc00df7-logs\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.294719 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.294765 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.294794 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.294850 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-config-data-custom\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.294894 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0856d1d8-20d9-4558-98fd-f955bbc00df7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.294918 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-scripts\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.294943 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x777t\" (UniqueName: \"kubernetes.io/projected/0856d1d8-20d9-4558-98fd-f955bbc00df7-kube-api-access-x777t\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.299946 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0856d1d8-20d9-4558-98fd-f955bbc00df7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.304733 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0856d1d8-20d9-4558-98fd-f955bbc00df7-logs\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc 
kubenswrapper[4828]: I1129 07:25:19.311646 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-config-data\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.313248 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.314054 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.315312 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-config-data-custom\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.319026 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.337722 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0856d1d8-20d9-4558-98fd-f955bbc00df7-scripts\") pod \"cinder-api-0\" (UID: 
\"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.381366 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x777t\" (UniqueName: \"kubernetes.io/projected/0856d1d8-20d9-4558-98fd-f955bbc00df7-kube-api-access-x777t\") pod \"cinder-api-0\" (UID: \"0856d1d8-20d9-4558-98fd-f955bbc00df7\") " pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.478086 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85c4ca87-22a9-405d-9c64-0e4863f53625" path="/var/lib/kubelet/pods/85c4ca87-22a9-405d-9c64-0e4863f53625/volumes" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.503892 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.684505 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.822897 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-dns-svc\") pod \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.823306 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-config\") pod \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.823380 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-ovsdbserver-nb\") pod 
\"dcb66a69-5eb2-4468-b7b9-beb16a814a76\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.823451 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-dns-swift-storage-0\") pod \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.823479 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgzw8\" (UniqueName: \"kubernetes.io/projected/dcb66a69-5eb2-4468-b7b9-beb16a814a76-kube-api-access-tgzw8\") pod \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.823518 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-ovsdbserver-sb\") pod \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\" (UID: \"dcb66a69-5eb2-4468-b7b9-beb16a814a76\") " Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.875646 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcb66a69-5eb2-4468-b7b9-beb16a814a76-kube-api-access-tgzw8" (OuterVolumeSpecName: "kube-api-access-tgzw8") pod "dcb66a69-5eb2-4468-b7b9-beb16a814a76" (UID: "dcb66a69-5eb2-4468-b7b9-beb16a814a76"). InnerVolumeSpecName "kube-api-access-tgzw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.930495 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgzw8\" (UniqueName: \"kubernetes.io/projected/dcb66a69-5eb2-4468-b7b9-beb16a814a76-kube-api-access-tgzw8\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.937244 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" event={"ID":"a5b5741a-29b4-4c45-85c7-8c2cb55857a3","Type":"ContainerStarted","Data":"c1f49cc97f71ca4e76f93335c163b591ba297a424237e976296c10c7b4914c84"} Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.949539 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dcb66a69-5eb2-4468-b7b9-beb16a814a76" (UID: "dcb66a69-5eb2-4468-b7b9-beb16a814a76"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.954480 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78d5585959-gnl5p"] Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.983579 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d425ec7-4438-4994-b963-6a046f23934f","Type":"ContainerStarted","Data":"44d96d5a41cdd1a1ab37a5649ed560156f4145759b63f7b23022f35ff3623d35"} Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.983898 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="ceilometer-central-agent" containerID="cri-o://244a9c8b4f7001173670be40f0bf48981cb48e2bd361257e467dec696d0fe172" gracePeriod=30 Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.984068 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.984481 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="sg-core" containerID="cri-o://1aeb9826496db2985242d15abc52612d57415aa54fa17197b17965086df5031a" gracePeriod=30 Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.984579 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="proxy-httpd" containerID="cri-o://44d96d5a41cdd1a1ab37a5649ed560156f4145759b63f7b23022f35ff3623d35" gracePeriod=30 Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.984674 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="ceilometer-notification-agent" 
containerID="cri-o://56f318fdc3e557003e060a2de0fa919e123713829127f0f25af2df001dcbd79f" gracePeriod=30 Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.991663 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-999b4d64b-9brmm"] Nov 29 07:25:19 crc kubenswrapper[4828]: I1129 07:25:19.993737 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dcb66a69-5eb2-4468-b7b9-beb16a814a76" (UID: "dcb66a69-5eb2-4468-b7b9-beb16a814a76"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.001875 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" event={"ID":"dcb66a69-5eb2-4468-b7b9-beb16a814a76","Type":"ContainerDied","Data":"142bfaa1c79c48774d04a0a6eaee4e6675c5f536650322935549a069b2d093ec"} Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.001990 4828 scope.go:117] "RemoveContainer" containerID="d0285cb48b7fee68afd4b7e46dd2e1c37c6a857c96473acc3179b48542a1e1b4" Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.002293 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-cwn7b" Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.005725 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cc498b8c4-hstck" event={"ID":"346a1de9-a6d0-451f-8ca9-172d43dc99f9","Type":"ContainerStarted","Data":"141ab910dd2cde6b954f523eac2d65be77990397c95900f4a65184a5a61c10a6"} Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.021773 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57b9f79f95-xdwsq" event={"ID":"c9c61053-a1cc-4c19-9042-61c7e4cdaffe","Type":"ContainerStarted","Data":"30a393d95fce49134295a86f412d551791ba5c8138c7fff4601be9905d2eacbc"} Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.027938 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dcb66a69-5eb2-4468-b7b9-beb16a814a76" (UID: "dcb66a69-5eb2-4468-b7b9-beb16a814a76"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.032635 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.032670 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.032689 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.033293 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.532660748 podStartE2EDuration="27.033248256s" podCreationTimestamp="2025-11-29 07:24:53 +0000 UTC" firstStartedPulling="2025-11-29 07:24:54.842827441 +0000 UTC m=+1434.464903499" lastFinishedPulling="2025-11-29 07:25:18.343414949 +0000 UTC m=+1457.965491007" observedRunningTime="2025-11-29 07:25:20.015162031 +0000 UTC m=+1459.637238099" watchObservedRunningTime="2025-11-29 07:25:20.033248256 +0000 UTC m=+1459.655324314" Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.060894 4828 scope.go:117] "RemoveContainer" containerID="4ed0e00170fde8fe7004be8b28332476b23b37697c0991c5e2bcf071281ba217" Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.089474 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dcb66a69-5eb2-4468-b7b9-beb16a814a76" (UID: "dcb66a69-5eb2-4468-b7b9-beb16a814a76"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.134210 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.282719 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-config" (OuterVolumeSpecName: "config") pod "dcb66a69-5eb2-4468-b7b9-beb16a814a76" (UID: "dcb66a69-5eb2-4468-b7b9-beb16a814a76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.338282 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb66a69-5eb2-4468-b7b9-beb16a814a76-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.677868 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7c4d784bd9-s5pdk"] Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.687255 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-77df56fcb4-fs2h4"] Nov 29 07:25:20 crc kubenswrapper[4828]: I1129 07:25:20.756545 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.019383 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cwn7b"] Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.029177 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cwn7b"] Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.058623 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-57b9f79f95-xdwsq" event={"ID":"c9c61053-a1cc-4c19-9042-61c7e4cdaffe","Type":"ContainerStarted","Data":"e78ebc5ecb44d788ed51f41537d2cfff7618992fced3eecabec6a5515cf41481"} Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.066131 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" event={"ID":"76228783-3735-4393-af2d-cd8ace3bd0aa","Type":"ContainerStarted","Data":"1b6f5fcb2815aa759d4d03a231e07efca676a0c09aa4f3a0cc474eb8b9f83826"} Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.096984 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d425ec7-4438-4994-b963-6a046f23934f" containerID="44d96d5a41cdd1a1ab37a5649ed560156f4145759b63f7b23022f35ff3623d35" exitCode=0 Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.097027 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d425ec7-4438-4994-b963-6a046f23934f" containerID="1aeb9826496db2985242d15abc52612d57415aa54fa17197b17965086df5031a" exitCode=2 Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.097038 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d425ec7-4438-4994-b963-6a046f23934f" containerID="56f318fdc3e557003e060a2de0fa919e123713829127f0f25af2df001dcbd79f" exitCode=0 Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.097047 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d425ec7-4438-4994-b963-6a046f23934f" containerID="244a9c8b4f7001173670be40f0bf48981cb48e2bd361257e467dec696d0fe172" exitCode=0 Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.097129 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d425ec7-4438-4994-b963-6a046f23934f","Type":"ContainerDied","Data":"44d96d5a41cdd1a1ab37a5649ed560156f4145759b63f7b23022f35ff3623d35"} Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.097164 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5d425ec7-4438-4994-b963-6a046f23934f","Type":"ContainerDied","Data":"1aeb9826496db2985242d15abc52612d57415aa54fa17197b17965086df5031a"} Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.097178 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d425ec7-4438-4994-b963-6a046f23934f","Type":"ContainerDied","Data":"56f318fdc3e557003e060a2de0fa919e123713829127f0f25af2df001dcbd79f"} Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.097190 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d425ec7-4438-4994-b963-6a046f23934f","Type":"ContainerDied","Data":"244a9c8b4f7001173670be40f0bf48981cb48e2bd361257e467dec696d0fe172"} Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.144948 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-57b9f79f95-xdwsq" podStartSLOduration=4.879827684 podStartE2EDuration="14.144919157s" podCreationTimestamp="2025-11-29 07:25:07 +0000 UTC" firstStartedPulling="2025-11-29 07:25:08.871318848 +0000 UTC m=+1448.493394906" lastFinishedPulling="2025-11-29 07:25:18.136410321 +0000 UTC m=+1457.758486379" observedRunningTime="2025-11-29 07:25:21.090123809 +0000 UTC m=+1460.712199887" watchObservedRunningTime="2025-11-29 07:25:21.144919157 +0000 UTC m=+1460.766995215" Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.151337 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5585959-gnl5p" event={"ID":"4852ae69-6066-464b-9934-604b2b5ae8a4","Type":"ContainerStarted","Data":"576b8052aae0391a8d1df4f0c8a80ecf875bd851ed96d5afab22b41c6d257ccf"} Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.198970 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d17b2e97-00d7-47ba-8b5c-c911a171bd27","Type":"ContainerStarted","Data":"3e94638de42e8008640ce6f10ef811ec789a580c48f4c396f453977edd15f70f"} Nov 29 
07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.216934 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" event={"ID":"a5b5741a-29b4-4c45-85c7-8c2cb55857a3","Type":"ContainerStarted","Data":"f70ddc64d575b04451db811a0eb4688ae2ccdcfad5126c72502d25f2472a96ba"} Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.265588 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=10.655512344 podStartE2EDuration="15.265569177s" podCreationTimestamp="2025-11-29 07:25:06 +0000 UTC" firstStartedPulling="2025-11-29 07:25:08.530183663 +0000 UTC m=+1448.152259721" lastFinishedPulling="2025-11-29 07:25:13.140240496 +0000 UTC m=+1452.762316554" observedRunningTime="2025-11-29 07:25:21.242148105 +0000 UTC m=+1460.864224183" watchObservedRunningTime="2025-11-29 07:25:21.265569177 +0000 UTC m=+1460.887645235" Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.269150 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77df56fcb4-fs2h4" event={"ID":"1227653a-94b6-4867-b24a-3a6e70f62d3b","Type":"ContainerStarted","Data":"d0ec688f5d2aec85c400c62dadb3c787bc9d3c60c771eb37c53d4095c069e0ce"} Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.273964 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0856d1d8-20d9-4558-98fd-f955bbc00df7","Type":"ContainerStarted","Data":"c4d9af03b24b85ac02d54e30e4752cf6b72bcf38fc5dcfba46ba3c2596c13485"} Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.301723 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cc498b8c4-hstck" event={"ID":"346a1de9-a6d0-451f-8ca9-172d43dc99f9","Type":"ContainerStarted","Data":"33e3e1e0bf3631c1218265b9a6c006118411a3b051857c2c66697ae5902f9f71"} Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.313458 4828 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/barbican-keystone-listener-69488889b8-dcf7m" podStartSLOduration=7.080658828 podStartE2EDuration="14.313425286s" podCreationTimestamp="2025-11-29 07:25:07 +0000 UTC" firstStartedPulling="2025-11-29 07:25:11.101486296 +0000 UTC m=+1450.723562354" lastFinishedPulling="2025-11-29 07:25:18.334252754 +0000 UTC m=+1457.956328812" observedRunningTime="2025-11-29 07:25:21.294686695 +0000 UTC m=+1460.916762753" watchObservedRunningTime="2025-11-29 07:25:21.313425286 +0000 UTC m=+1460.935501354" Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.322473 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-999b4d64b-9brmm" event={"ID":"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc","Type":"ContainerStarted","Data":"1410b5797a1cd7dc270dc68394ef441f131ae5d09eaadea1558b55a6b516d305"} Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.454983 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcb66a69-5eb2-4468-b7b9-beb16a814a76" path="/var/lib/kubelet/pods/dcb66a69-5eb2-4468-b7b9-beb16a814a76/volumes" Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.741742 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.791081 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxqcr\" (UniqueName: \"kubernetes.io/projected/5d425ec7-4438-4994-b963-6a046f23934f-kube-api-access-mxqcr\") pod \"5d425ec7-4438-4994-b963-6a046f23934f\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.796408 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-sg-core-conf-yaml\") pod \"5d425ec7-4438-4994-b963-6a046f23934f\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.796517 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-scripts\") pod \"5d425ec7-4438-4994-b963-6a046f23934f\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.796546 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d425ec7-4438-4994-b963-6a046f23934f-run-httpd\") pod \"5d425ec7-4438-4994-b963-6a046f23934f\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.796619 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d425ec7-4438-4994-b963-6a046f23934f-log-httpd\") pod \"5d425ec7-4438-4994-b963-6a046f23934f\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.796645 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-combined-ca-bundle\") pod \"5d425ec7-4438-4994-b963-6a046f23934f\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.796723 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-config-data\") pod \"5d425ec7-4438-4994-b963-6a046f23934f\" (UID: \"5d425ec7-4438-4994-b963-6a046f23934f\") " Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.797996 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d425ec7-4438-4994-b963-6a046f23934f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5d425ec7-4438-4994-b963-6a046f23934f" (UID: "5d425ec7-4438-4994-b963-6a046f23934f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.801637 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d425ec7-4438-4994-b963-6a046f23934f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5d425ec7-4438-4994-b963-6a046f23934f" (UID: "5d425ec7-4438-4994-b963-6a046f23934f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.831344 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d425ec7-4438-4994-b963-6a046f23934f-kube-api-access-mxqcr" (OuterVolumeSpecName: "kube-api-access-mxqcr") pod "5d425ec7-4438-4994-b963-6a046f23934f" (UID: "5d425ec7-4438-4994-b963-6a046f23934f"). InnerVolumeSpecName "kube-api-access-mxqcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.833910 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-scripts" (OuterVolumeSpecName: "scripts") pod "5d425ec7-4438-4994-b963-6a046f23934f" (UID: "5d425ec7-4438-4994-b963-6a046f23934f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.899364 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.899408 4828 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d425ec7-4438-4994-b963-6a046f23934f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.899420 4828 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d425ec7-4438-4994-b963-6a046f23934f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.899429 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxqcr\" (UniqueName: \"kubernetes.io/projected/5d425ec7-4438-4994-b963-6a046f23934f-kube-api-access-mxqcr\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:21 crc kubenswrapper[4828]: I1129 07:25:21.923456 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5d425ec7-4438-4994-b963-6a046f23934f" (UID: "5d425ec7-4438-4994-b963-6a046f23934f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.000987 4828 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.027110 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d425ec7-4438-4994-b963-6a046f23934f" (UID: "5d425ec7-4438-4994-b963-6a046f23934f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.062633 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.067548 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="d17b2e97-00d7-47ba-8b5c-c911a171bd27" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.154:8080/\": dial tcp 10.217.0.154:8080: connect: connection refused" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.103598 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.123464 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-config-data" (OuterVolumeSpecName: "config-data") pod "5d425ec7-4438-4994-b963-6a046f23934f" (UID: "5d425ec7-4438-4994-b963-6a046f23934f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.206029 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d425ec7-4438-4994-b963-6a046f23934f-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.222110 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.287758 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8c6f8f658-jqjcb" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.356512 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-999b4d64b-9brmm" event={"ID":"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc","Type":"ContainerStarted","Data":"95cf632a677528bf161566975719001239ffeb10f9f291093a7f2c3cc8074508"} Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.358145 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.373694 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0856d1d8-20d9-4558-98fd-f955bbc00df7","Type":"ContainerStarted","Data":"a5a2278dc126427906acc77796ec91b46615d90098503e2309a914d01cc2fd7d"} Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.391751 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d425ec7-4438-4994-b963-6a046f23934f","Type":"ContainerDied","Data":"a7b7f7e24e795c22157f144e7e3168f4980f386f76860860f4c6b699da44c20a"} Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.391818 4828 scope.go:117] "RemoveContainer" containerID="44d96d5a41cdd1a1ab37a5649ed560156f4145759b63f7b23022f35ff3623d35" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 
07:25:22.391998 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.421765 4828 generic.go:334] "Generic (PLEG): container finished" podID="4852ae69-6066-464b-9934-604b2b5ae8a4" containerID="d43c2d7a14092bf1d008745d1d65da292bed822ed50ba4c4f1dfe2fd4f1e9a6b" exitCode=0 Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.421852 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5585959-gnl5p" event={"ID":"4852ae69-6066-464b-9934-604b2b5ae8a4","Type":"ContainerDied","Data":"d43c2d7a14092bf1d008745d1d65da292bed822ed50ba4c4f1dfe2fd4f1e9a6b"} Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.434913 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-999b4d64b-9brmm" podStartSLOduration=4.43489215 podStartE2EDuration="4.43489215s" podCreationTimestamp="2025-11-29 07:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:22.400576688 +0000 UTC m=+1462.022652766" watchObservedRunningTime="2025-11-29 07:25:22.43489215 +0000 UTC m=+1462.056968218" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.459569 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cc498b8c4-hstck" event={"ID":"346a1de9-a6d0-451f-8ca9-172d43dc99f9","Type":"ContainerStarted","Data":"171d86e2661891a8dd539d3071644c70f806270b9c27e4c87ed885c2cc991fe9"} Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.459899 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.462033 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.561845 4828 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.562828 4828 scope.go:117] "RemoveContainer" containerID="1aeb9826496db2985242d15abc52612d57415aa54fa17197b17965086df5031a" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.615486 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.628662 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-cc498b8c4-hstck" podStartSLOduration=8.628637348 podStartE2EDuration="8.628637348s" podCreationTimestamp="2025-11-29 07:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:22.5337569 +0000 UTC m=+1462.155832968" watchObservedRunningTime="2025-11-29 07:25:22.628637348 +0000 UTC m=+1462.250713406" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.713516 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:25:22 crc kubenswrapper[4828]: E1129 07:25:22.714024 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="sg-core" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.714048 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="sg-core" Nov 29 07:25:22 crc kubenswrapper[4828]: E1129 07:25:22.714066 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="proxy-httpd" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.714075 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="proxy-httpd" Nov 29 07:25:22 crc kubenswrapper[4828]: E1129 07:25:22.714098 4828 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="ceilometer-notification-agent" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.714106 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="ceilometer-notification-agent" Nov 29 07:25:22 crc kubenswrapper[4828]: E1129 07:25:22.714136 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb66a69-5eb2-4468-b7b9-beb16a814a76" containerName="dnsmasq-dns" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.714143 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb66a69-5eb2-4468-b7b9-beb16a814a76" containerName="dnsmasq-dns" Nov 29 07:25:22 crc kubenswrapper[4828]: E1129 07:25:22.714154 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb66a69-5eb2-4468-b7b9-beb16a814a76" containerName="init" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.714162 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb66a69-5eb2-4468-b7b9-beb16a814a76" containerName="init" Nov 29 07:25:22 crc kubenswrapper[4828]: E1129 07:25:22.714173 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="ceilometer-central-agent" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.714181 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="ceilometer-central-agent" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.714486 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="ceilometer-notification-agent" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.714507 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="sg-core" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.714526 4828 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="dcb66a69-5eb2-4468-b7b9-beb16a814a76" containerName="dnsmasq-dns" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.714537 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="proxy-httpd" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.714551 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d425ec7-4438-4994-b963-6a046f23934f" containerName="ceilometer-central-agent" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.716719 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.722818 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.723149 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.744539 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e543180-ec99-4502-9722-5a819aad79d7-log-httpd\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.744599 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-config-data\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.744671 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-scripts\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.744689 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.744743 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.744771 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvfhl\" (UniqueName: \"kubernetes.io/projected/6e543180-ec99-4502-9722-5a819aad79d7-kube-api-access-jvfhl\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.752046 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e543180-ec99-4502-9722-5a819aad79d7-run-httpd\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.771436 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.851642 4828 scope.go:117] "RemoveContainer" 
containerID="56f318fdc3e557003e060a2de0fa919e123713829127f0f25af2df001dcbd79f" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.853734 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e543180-ec99-4502-9722-5a819aad79d7-run-httpd\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.853805 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e543180-ec99-4502-9722-5a819aad79d7-log-httpd\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.853878 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-config-data\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.853944 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.853965 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-scripts\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.854011 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.854036 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvfhl\" (UniqueName: \"kubernetes.io/projected/6e543180-ec99-4502-9722-5a819aad79d7-kube-api-access-jvfhl\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.854848 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e543180-ec99-4502-9722-5a819aad79d7-run-httpd\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.855196 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e543180-ec99-4502-9722-5a819aad79d7-log-httpd\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.880217 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-config-data\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.900052 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.909167 
4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvfhl\" (UniqueName: \"kubernetes.io/projected/6e543180-ec99-4502-9722-5a819aad79d7-kube-api-access-jvfhl\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.909638 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.927229 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-scripts\") pod \"ceilometer-0\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " pod="openstack/ceilometer-0" Nov 29 07:25:22 crc kubenswrapper[4828]: I1129 07:25:22.989947 4828 scope.go:117] "RemoveContainer" containerID="244a9c8b4f7001173670be40f0bf48981cb48e2bd361257e467dec696d0fe172" Nov 29 07:25:23 crc kubenswrapper[4828]: I1129 07:25:23.131873 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:25:23 crc kubenswrapper[4828]: I1129 07:25:23.441807 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d425ec7-4438-4994-b963-6a046f23934f" path="/var/lib/kubelet/pods/5d425ec7-4438-4994-b963-6a046f23934f/volumes" Nov 29 07:25:23 crc kubenswrapper[4828]: I1129 07:25:23.712703 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:25:23 crc kubenswrapper[4828]: W1129 07:25:23.760560 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e543180_ec99_4502_9722_5a819aad79d7.slice/crio-09d85362eee5408ae14fa186d49418d792192b1fa33b4e9c35ba3935217edc8a WatchSource:0}: Error finding container 09d85362eee5408ae14fa186d49418d792192b1fa33b4e9c35ba3935217edc8a: Status 404 returned error can't find the container with id 09d85362eee5408ae14fa186d49418d792192b1fa33b4e9c35ba3935217edc8a Nov 29 07:25:24 crc kubenswrapper[4828]: I1129 07:25:24.516544 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e543180-ec99-4502-9722-5a819aad79d7","Type":"ContainerStarted","Data":"09d85362eee5408ae14fa186d49418d792192b1fa33b4e9c35ba3935217edc8a"} Nov 29 07:25:25 crc kubenswrapper[4828]: I1129 07:25:25.530580 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0856d1d8-20d9-4558-98fd-f955bbc00df7","Type":"ContainerStarted","Data":"271ef13078d2cf29bb418e63ad594b4640514646e75c4dbfbf0b4e706a0a2b8b"} Nov 29 07:25:25 crc kubenswrapper[4828]: I1129 07:25:25.531736 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 29 07:25:25 crc kubenswrapper[4828]: I1129 07:25:25.541015 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5585959-gnl5p" 
event={"ID":"4852ae69-6066-464b-9934-604b2b5ae8a4","Type":"ContainerStarted","Data":"c54fefd56f9a67404a803deaeb56ff92fed6cb4c1bd9455d529651ed7ace016a"} Nov 29 07:25:25 crc kubenswrapper[4828]: I1129 07:25:25.542081 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:25:25 crc kubenswrapper[4828]: I1129 07:25:25.561744 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.561719076 podStartE2EDuration="7.561719076s" podCreationTimestamp="2025-11-29 07:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:25.556359229 +0000 UTC m=+1465.178435287" watchObservedRunningTime="2025-11-29 07:25:25.561719076 +0000 UTC m=+1465.183795134" Nov 29 07:25:25 crc kubenswrapper[4828]: I1129 07:25:25.588394 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78d5585959-gnl5p" podStartSLOduration=7.588366331 podStartE2EDuration="7.588366331s" podCreationTimestamp="2025-11-29 07:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:25.578851017 +0000 UTC m=+1465.200927085" watchObservedRunningTime="2025-11-29 07:25:25.588366331 +0000 UTC m=+1465.210442409" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.042609 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.126867 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-cc498b8c4-hstck" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.310953 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 29 07:25:27 crc kubenswrapper[4828]: 
I1129 07:25:27.390858 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.549825 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-fd957fd8c-nfdrx"] Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.551659 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-fd957fd8c-nfdrx" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.558999 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d17b2e97-00d7-47ba-8b5c-c911a171bd27" containerName="cinder-scheduler" containerID="cri-o://bbf056d07eb70302ab25f7ae4190b7f5d5a90e65497f461202b1aa5c290f8cd0" gracePeriod=30 Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.559339 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d17b2e97-00d7-47ba-8b5c-c911a171bd27" containerName="probe" containerID="cri-o://3e94638de42e8008640ce6f10ef811ec789a580c48f4c396f453977edd15f70f" gracePeriod=30 Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.582618 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwcj7\" (UniqueName: \"kubernetes.io/projected/65ec8661-f29c-455c-b0b6-04aaaad39bda-kube-api-access-bwcj7\") pod \"heat-engine-fd957fd8c-nfdrx\" (UID: \"65ec8661-f29c-455c-b0b6-04aaaad39bda\") " pod="openstack/heat-engine-fd957fd8c-nfdrx" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.582755 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65ec8661-f29c-455c-b0b6-04aaaad39bda-config-data-custom\") pod \"heat-engine-fd957fd8c-nfdrx\" (UID: \"65ec8661-f29c-455c-b0b6-04aaaad39bda\") " pod="openstack/heat-engine-fd957fd8c-nfdrx" Nov 29 07:25:27 
crc kubenswrapper[4828]: I1129 07:25:27.582934 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ec8661-f29c-455c-b0b6-04aaaad39bda-combined-ca-bundle\") pod \"heat-engine-fd957fd8c-nfdrx\" (UID: \"65ec8661-f29c-455c-b0b6-04aaaad39bda\") " pod="openstack/heat-engine-fd957fd8c-nfdrx" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.583009 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65ec8661-f29c-455c-b0b6-04aaaad39bda-config-data\") pod \"heat-engine-fd957fd8c-nfdrx\" (UID: \"65ec8661-f29c-455c-b0b6-04aaaad39bda\") " pod="openstack/heat-engine-fd957fd8c-nfdrx" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.583938 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-57744bffdb-m2ffz"] Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.585196 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57744bffdb-m2ffz" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.620235 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-654c45c88d-sbsls"] Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.621767 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-654c45c88d-sbsls" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.647117 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-fd957fd8c-nfdrx"] Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.664460 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57744bffdb-m2ffz"] Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.683341 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-654c45c88d-sbsls"] Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.684603 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-combined-ca-bundle\") pod \"heat-api-654c45c88d-sbsls\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " pod="openstack/heat-api-654c45c88d-sbsls" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.684659 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9vbx\" (UniqueName: \"kubernetes.io/projected/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-kube-api-access-p9vbx\") pod \"heat-cfnapi-57744bffdb-m2ffz\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " pod="openstack/heat-cfnapi-57744bffdb-m2ffz" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.684705 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65ec8661-f29c-455c-b0b6-04aaaad39bda-config-data-custom\") pod \"heat-engine-fd957fd8c-nfdrx\" (UID: \"65ec8661-f29c-455c-b0b6-04aaaad39bda\") " pod="openstack/heat-engine-fd957fd8c-nfdrx" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.684803 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xvtx\" (UniqueName: 
\"kubernetes.io/projected/992b3577-23a8-4d07-8826-821fce571ebd-kube-api-access-2xvtx\") pod \"heat-api-654c45c88d-sbsls\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " pod="openstack/heat-api-654c45c88d-sbsls" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.684869 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-combined-ca-bundle\") pod \"heat-cfnapi-57744bffdb-m2ffz\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " pod="openstack/heat-cfnapi-57744bffdb-m2ffz" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.684927 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ec8661-f29c-455c-b0b6-04aaaad39bda-combined-ca-bundle\") pod \"heat-engine-fd957fd8c-nfdrx\" (UID: \"65ec8661-f29c-455c-b0b6-04aaaad39bda\") " pod="openstack/heat-engine-fd957fd8c-nfdrx" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.684955 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-config-data\") pod \"heat-cfnapi-57744bffdb-m2ffz\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " pod="openstack/heat-cfnapi-57744bffdb-m2ffz" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.685004 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65ec8661-f29c-455c-b0b6-04aaaad39bda-config-data\") pod \"heat-engine-fd957fd8c-nfdrx\" (UID: \"65ec8661-f29c-455c-b0b6-04aaaad39bda\") " pod="openstack/heat-engine-fd957fd8c-nfdrx" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.685077 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwcj7\" (UniqueName: 
\"kubernetes.io/projected/65ec8661-f29c-455c-b0b6-04aaaad39bda-kube-api-access-bwcj7\") pod \"heat-engine-fd957fd8c-nfdrx\" (UID: \"65ec8661-f29c-455c-b0b6-04aaaad39bda\") " pod="openstack/heat-engine-fd957fd8c-nfdrx" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.685142 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-config-data\") pod \"heat-api-654c45c88d-sbsls\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " pod="openstack/heat-api-654c45c88d-sbsls" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.685187 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-config-data-custom\") pod \"heat-cfnapi-57744bffdb-m2ffz\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " pod="openstack/heat-cfnapi-57744bffdb-m2ffz" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.685252 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-config-data-custom\") pod \"heat-api-654c45c88d-sbsls\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " pod="openstack/heat-api-654c45c88d-sbsls" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.694019 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65ec8661-f29c-455c-b0b6-04aaaad39bda-config-data-custom\") pod \"heat-engine-fd957fd8c-nfdrx\" (UID: \"65ec8661-f29c-455c-b0b6-04aaaad39bda\") " pod="openstack/heat-engine-fd957fd8c-nfdrx" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.695820 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/65ec8661-f29c-455c-b0b6-04aaaad39bda-combined-ca-bundle\") pod \"heat-engine-fd957fd8c-nfdrx\" (UID: \"65ec8661-f29c-455c-b0b6-04aaaad39bda\") " pod="openstack/heat-engine-fd957fd8c-nfdrx" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.715216 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65ec8661-f29c-455c-b0b6-04aaaad39bda-config-data\") pod \"heat-engine-fd957fd8c-nfdrx\" (UID: \"65ec8661-f29c-455c-b0b6-04aaaad39bda\") " pod="openstack/heat-engine-fd957fd8c-nfdrx" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.766821 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwcj7\" (UniqueName: \"kubernetes.io/projected/65ec8661-f29c-455c-b0b6-04aaaad39bda-kube-api-access-bwcj7\") pod \"heat-engine-fd957fd8c-nfdrx\" (UID: \"65ec8661-f29c-455c-b0b6-04aaaad39bda\") " pod="openstack/heat-engine-fd957fd8c-nfdrx" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.787241 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xvtx\" (UniqueName: \"kubernetes.io/projected/992b3577-23a8-4d07-8826-821fce571ebd-kube-api-access-2xvtx\") pod \"heat-api-654c45c88d-sbsls\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " pod="openstack/heat-api-654c45c88d-sbsls" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.787696 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-combined-ca-bundle\") pod \"heat-cfnapi-57744bffdb-m2ffz\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " pod="openstack/heat-cfnapi-57744bffdb-m2ffz" Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.787827 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-config-data\") pod \"heat-cfnapi-57744bffdb-m2ffz\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " pod="openstack/heat-cfnapi-57744bffdb-m2ffz"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.787963 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-config-data\") pod \"heat-api-654c45c88d-sbsls\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " pod="openstack/heat-api-654c45c88d-sbsls"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.788062 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-config-data-custom\") pod \"heat-cfnapi-57744bffdb-m2ffz\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " pod="openstack/heat-cfnapi-57744bffdb-m2ffz"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.788141 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-config-data-custom\") pod \"heat-api-654c45c88d-sbsls\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " pod="openstack/heat-api-654c45c88d-sbsls"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.788245 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-combined-ca-bundle\") pod \"heat-api-654c45c88d-sbsls\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " pod="openstack/heat-api-654c45c88d-sbsls"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.788368 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9vbx\" (UniqueName: \"kubernetes.io/projected/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-kube-api-access-p9vbx\") pod \"heat-cfnapi-57744bffdb-m2ffz\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " pod="openstack/heat-cfnapi-57744bffdb-m2ffz"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.802616 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-config-data-custom\") pod \"heat-api-654c45c88d-sbsls\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " pod="openstack/heat-api-654c45c88d-sbsls"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.805407 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-combined-ca-bundle\") pod \"heat-api-654c45c88d-sbsls\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " pod="openstack/heat-api-654c45c88d-sbsls"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.806118 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-config-data-custom\") pod \"heat-cfnapi-57744bffdb-m2ffz\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " pod="openstack/heat-cfnapi-57744bffdb-m2ffz"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.810398 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-combined-ca-bundle\") pod \"heat-cfnapi-57744bffdb-m2ffz\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " pod="openstack/heat-cfnapi-57744bffdb-m2ffz"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.811410 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-config-data\") pod \"heat-api-654c45c88d-sbsls\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " pod="openstack/heat-api-654c45c88d-sbsls"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.824456 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-config-data\") pod \"heat-cfnapi-57744bffdb-m2ffz\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " pod="openstack/heat-cfnapi-57744bffdb-m2ffz"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.824710 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9vbx\" (UniqueName: \"kubernetes.io/projected/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-kube-api-access-p9vbx\") pod \"heat-cfnapi-57744bffdb-m2ffz\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " pod="openstack/heat-cfnapi-57744bffdb-m2ffz"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.834300 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xvtx\" (UniqueName: \"kubernetes.io/projected/992b3577-23a8-4d07-8826-821fce571ebd-kube-api-access-2xvtx\") pod \"heat-api-654c45c88d-sbsls\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " pod="openstack/heat-api-654c45c88d-sbsls"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.899955 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-fd957fd8c-nfdrx"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.924786 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57744bffdb-m2ffz"
Nov 29 07:25:27 crc kubenswrapper[4828]: I1129 07:25:27.952791 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-654c45c88d-sbsls"
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.231358 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2vrls"]
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.239903 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vrls"]
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.240028 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vrls"
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.331231 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f0fb97-210f-4cb2-82df-a802745d9cb0-catalog-content\") pod \"redhat-operators-2vrls\" (UID: \"92f0fb97-210f-4cb2-82df-a802745d9cb0\") " pod="openshift-marketplace/redhat-operators-2vrls"
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.331586 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w865x\" (UniqueName: \"kubernetes.io/projected/92f0fb97-210f-4cb2-82df-a802745d9cb0-kube-api-access-w865x\") pod \"redhat-operators-2vrls\" (UID: \"92f0fb97-210f-4cb2-82df-a802745d9cb0\") " pod="openshift-marketplace/redhat-operators-2vrls"
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.331686 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f0fb97-210f-4cb2-82df-a802745d9cb0-utilities\") pod \"redhat-operators-2vrls\" (UID: \"92f0fb97-210f-4cb2-82df-a802745d9cb0\") " pod="openshift-marketplace/redhat-operators-2vrls"
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.433824 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f0fb97-210f-4cb2-82df-a802745d9cb0-catalog-content\") pod \"redhat-operators-2vrls\" (UID: \"92f0fb97-210f-4cb2-82df-a802745d9cb0\") " pod="openshift-marketplace/redhat-operators-2vrls"
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.435828 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w865x\" (UniqueName: \"kubernetes.io/projected/92f0fb97-210f-4cb2-82df-a802745d9cb0-kube-api-access-w865x\") pod \"redhat-operators-2vrls\" (UID: \"92f0fb97-210f-4cb2-82df-a802745d9cb0\") " pod="openshift-marketplace/redhat-operators-2vrls"
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.435977 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f0fb97-210f-4cb2-82df-a802745d9cb0-utilities\") pod \"redhat-operators-2vrls\" (UID: \"92f0fb97-210f-4cb2-82df-a802745d9cb0\") " pod="openshift-marketplace/redhat-operators-2vrls"
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.436593 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f0fb97-210f-4cb2-82df-a802745d9cb0-utilities\") pod \"redhat-operators-2vrls\" (UID: \"92f0fb97-210f-4cb2-82df-a802745d9cb0\") " pod="openshift-marketplace/redhat-operators-2vrls"
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.436940 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f0fb97-210f-4cb2-82df-a802745d9cb0-catalog-content\") pod \"redhat-operators-2vrls\" (UID: \"92f0fb97-210f-4cb2-82df-a802745d9cb0\") " pod="openshift-marketplace/redhat-operators-2vrls"
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.462717 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w865x\" (UniqueName: \"kubernetes.io/projected/92f0fb97-210f-4cb2-82df-a802745d9cb0-kube-api-access-w865x\") pod \"redhat-operators-2vrls\" (UID: \"92f0fb97-210f-4cb2-82df-a802745d9cb0\") " pod="openshift-marketplace/redhat-operators-2vrls"
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.472160 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-654c45c88d-sbsls"]
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.472248 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-cc498b8c4-hstck"
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.555175 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8c6f8f658-jqjcb"]
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.555452 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8c6f8f658-jqjcb" podUID="18bf2da4-1500-4545-b55a-2a629614b238" containerName="barbican-api-log" containerID="cri-o://56b7194e30ca951f8c1038ba9c1065b89eba6bbe44a5b3d4b6b6b73a84d2657f" gracePeriod=30
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.555586 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8c6f8f658-jqjcb" podUID="18bf2da4-1500-4545-b55a-2a629614b238" containerName="barbican-api" containerID="cri-o://9e747e5ca0b0a38acd3319c454970b7bf9a3262a489221ed8aff488f76ab563f" gracePeriod=30
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.638920 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-654c45c88d-sbsls" event={"ID":"992b3577-23a8-4d07-8826-821fce571ebd","Type":"ContainerStarted","Data":"565ae8646a20ead5bd7654650ef3e0deb9d0c379e6d38eb3ed8fa66336bb5dd3"}
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.648013 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vrls"
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.659782 4828 generic.go:334] "Generic (PLEG): container finished" podID="d17b2e97-00d7-47ba-8b5c-c911a171bd27" containerID="3e94638de42e8008640ce6f10ef811ec789a580c48f4c396f453977edd15f70f" exitCode=0
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.659838 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d17b2e97-00d7-47ba-8b5c-c911a171bd27","Type":"ContainerDied","Data":"3e94638de42e8008640ce6f10ef811ec789a580c48f4c396f453977edd15f70f"}
Nov 29 07:25:29 crc kubenswrapper[4828]: I1129 07:25:29.883151 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57744bffdb-m2ffz"]
Nov 29 07:25:29 crc kubenswrapper[4828]: W1129 07:25:29.924135 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b0fed58_e5bc_453b_9918_5d1a44dcf00d.slice/crio-ccef3b9f9be59e65535a54b0046334111e6f68e0bae521ac8724ba0dba28274f WatchSource:0}: Error finding container ccef3b9f9be59e65535a54b0046334111e6f68e0bae521ac8724ba0dba28274f: Status 404 returned error can't find the container with id ccef3b9f9be59e65535a54b0046334111e6f68e0bae521ac8724ba0dba28274f
Nov 29 07:25:30 crc kubenswrapper[4828]: W1129 07:25:30.218322 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65ec8661_f29c_455c_b0b6_04aaaad39bda.slice/crio-31669252e193801fa52f8cecf9a0abb8c211b947c71abd340931864a5173b5c6 WatchSource:0}: Error finding container 31669252e193801fa52f8cecf9a0abb8c211b947c71abd340931864a5173b5c6: Status 404 returned error can't find the container with id 31669252e193801fa52f8cecf9a0abb8c211b947c71abd340931864a5173b5c6
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.219547 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-fd957fd8c-nfdrx"]
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.515681 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vrls"]
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.747588 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" event={"ID":"76228783-3735-4393-af2d-cd8ace3bd0aa","Type":"ContainerStarted","Data":"89e752c81f3d43be2e681eb602f21fa5e468e7cc86ada9f17f3a49cf46df7fc1"}
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.749184 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk"
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.768899 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-fd957fd8c-nfdrx" event={"ID":"65ec8661-f29c-455c-b0b6-04aaaad39bda","Type":"ContainerStarted","Data":"31669252e193801fa52f8cecf9a0abb8c211b947c71abd340931864a5173b5c6"}
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.776804 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" podStartSLOduration=4.236064802 podStartE2EDuration="12.776783735s" podCreationTimestamp="2025-11-29 07:25:18 +0000 UTC" firstStartedPulling="2025-11-29 07:25:20.762734288 +0000 UTC m=+1460.384810346" lastFinishedPulling="2025-11-29 07:25:29.303453211 +0000 UTC m=+1468.925529279" observedRunningTime="2025-11-29 07:25:30.776538608 +0000 UTC m=+1470.398614676" watchObservedRunningTime="2025-11-29 07:25:30.776783735 +0000 UTC m=+1470.398859793"
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.834718 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77df56fcb4-fs2h4" event={"ID":"1227653a-94b6-4867-b24a-3a6e70f62d3b","Type":"ContainerStarted","Data":"20d68f35c6e76b6d0e3298e6411d717c0194a1d6c78fd8840722ae9a632611ab"}
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.835119 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-77df56fcb4-fs2h4"
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.859217 4828 generic.go:334] "Generic (PLEG): container finished" podID="18bf2da4-1500-4545-b55a-2a629614b238" containerID="56b7194e30ca951f8c1038ba9c1065b89eba6bbe44a5b3d4b6b6b73a84d2657f" exitCode=143
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.859392 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8c6f8f658-jqjcb" event={"ID":"18bf2da4-1500-4545-b55a-2a629614b238","Type":"ContainerDied","Data":"56b7194e30ca951f8c1038ba9c1065b89eba6bbe44a5b3d4b6b6b73a84d2657f"}
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.872779 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e543180-ec99-4502-9722-5a819aad79d7","Type":"ContainerStarted","Data":"6d71a8726daa13c706f787d4a521ef1e76ac9e684f253046fd92028a962ba826"}
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.880171 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrls" event={"ID":"92f0fb97-210f-4cb2-82df-a802745d9cb0","Type":"ContainerStarted","Data":"f8c8c34b4643417500b407f433bbf573e604952810ff4b283e8c8d34c1da1a63"}
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.882106 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57744bffdb-m2ffz" event={"ID":"0b0fed58-e5bc-453b-9918-5d1a44dcf00d","Type":"ContainerStarted","Data":"065db557e2b177922977373d8e79e135518cd0a8a9d6f564925c4f2d30320b5e"}
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.882177 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57744bffdb-m2ffz" event={"ID":"0b0fed58-e5bc-453b-9918-5d1a44dcf00d","Type":"ContainerStarted","Data":"ccef3b9f9be59e65535a54b0046334111e6f68e0bae521ac8724ba0dba28274f"}
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.883517 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-57744bffdb-m2ffz"
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.875249 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-77df56fcb4-fs2h4" podStartSLOduration=4.39635639 podStartE2EDuration="12.875225904s" podCreationTimestamp="2025-11-29 07:25:18 +0000 UTC" firstStartedPulling="2025-11-29 07:25:20.757281438 +0000 UTC m=+1460.379357496" lastFinishedPulling="2025-11-29 07:25:29.236150952 +0000 UTC m=+1468.858227010" observedRunningTime="2025-11-29 07:25:30.857286033 +0000 UTC m=+1470.479362101" watchObservedRunningTime="2025-11-29 07:25:30.875225904 +0000 UTC m=+1470.497301962"
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.912562 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-654c45c88d-sbsls" event={"ID":"992b3577-23a8-4d07-8826-821fce571ebd","Type":"ContainerStarted","Data":"59a6605c5c67167388a9bc149919ccee40a5aade9409e8d503dccc78fdc75d9f"}
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.913592 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-654c45c88d-sbsls"
Nov 29 07:25:30 crc kubenswrapper[4828]: I1129 07:25:30.966980 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-57744bffdb-m2ffz" podStartSLOduration=3.96692611 podStartE2EDuration="3.96692611s" podCreationTimestamp="2025-11-29 07:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:30.950821726 +0000 UTC m=+1470.572897784" watchObservedRunningTime="2025-11-29 07:25:30.96692611 +0000 UTC m=+1470.589002158"
Nov 29 07:25:31 crc kubenswrapper[4828]: I1129 07:25:31.566802 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-654c45c88d-sbsls" podStartSLOduration=4.566775331 podStartE2EDuration="4.566775331s" podCreationTimestamp="2025-11-29 07:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:31.023611927 +0000 UTC m=+1470.645687985" watchObservedRunningTime="2025-11-29 07:25:31.566775331 +0000 UTC m=+1471.188851389"
Nov 29 07:25:31 crc kubenswrapper[4828]: I1129 07:25:31.934088 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-fd957fd8c-nfdrx" event={"ID":"65ec8661-f29c-455c-b0b6-04aaaad39bda","Type":"ContainerStarted","Data":"15859582671d82be9a9d8569df20280bbfa1ee765a985dc08dd4dde4023a8ca1"}
Nov 29 07:25:31 crc kubenswrapper[4828]: I1129 07:25:31.934307 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-fd957fd8c-nfdrx"
Nov 29 07:25:31 crc kubenswrapper[4828]: I1129 07:25:31.936300 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e543180-ec99-4502-9722-5a819aad79d7","Type":"ContainerStarted","Data":"85a70432e33cdb3f8f595aca0db5acfe7736217fae1f930d49ddfc20c7d9e74a"}
Nov 29 07:25:31 crc kubenswrapper[4828]: I1129 07:25:31.938141 4828 generic.go:334] "Generic (PLEG): container finished" podID="92f0fb97-210f-4cb2-82df-a802745d9cb0" containerID="444d9486cd880d27165755d1f63579521a06c629ca06f8cc6d3998358040299c" exitCode=0
Nov 29 07:25:31 crc kubenswrapper[4828]: I1129 07:25:31.938223 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrls" event={"ID":"92f0fb97-210f-4cb2-82df-a802745d9cb0","Type":"ContainerDied","Data":"444d9486cd880d27165755d1f63579521a06c629ca06f8cc6d3998358040299c"}
Nov 29 07:25:31 crc kubenswrapper[4828]: I1129 07:25:31.939881 4828 generic.go:334] "Generic (PLEG): container finished" podID="992b3577-23a8-4d07-8826-821fce571ebd" containerID="59a6605c5c67167388a9bc149919ccee40a5aade9409e8d503dccc78fdc75d9f" exitCode=1
Nov 29 07:25:31 crc kubenswrapper[4828]: I1129 07:25:31.940220 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-654c45c88d-sbsls" event={"ID":"992b3577-23a8-4d07-8826-821fce571ebd","Type":"ContainerDied","Data":"59a6605c5c67167388a9bc149919ccee40a5aade9409e8d503dccc78fdc75d9f"}
Nov 29 07:25:31 crc kubenswrapper[4828]: I1129 07:25:31.940667 4828 scope.go:117] "RemoveContainer" containerID="59a6605c5c67167388a9bc149919ccee40a5aade9409e8d503dccc78fdc75d9f"
Nov 29 07:25:31 crc kubenswrapper[4828]: I1129 07:25:31.976703 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-fd957fd8c-nfdrx" podStartSLOduration=4.976684753 podStartE2EDuration="4.976684753s" podCreationTimestamp="2025-11-29 07:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:31.958103035 +0000 UTC m=+1471.580179103" watchObservedRunningTime="2025-11-29 07:25:31.976684753 +0000 UTC m=+1471.598760821"
Nov 29 07:25:32 crc kubenswrapper[4828]: I1129 07:25:32.951562 4828 generic.go:334] "Generic (PLEG): container finished" podID="0b0fed58-e5bc-453b-9918-5d1a44dcf00d" containerID="065db557e2b177922977373d8e79e135518cd0a8a9d6f564925c4f2d30320b5e" exitCode=1
Nov 29 07:25:32 crc kubenswrapper[4828]: I1129 07:25:32.951655 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57744bffdb-m2ffz" event={"ID":"0b0fed58-e5bc-453b-9918-5d1a44dcf00d","Type":"ContainerDied","Data":"065db557e2b177922977373d8e79e135518cd0a8a9d6f564925c4f2d30320b5e"}
Nov 29 07:25:32 crc kubenswrapper[4828]: I1129 07:25:32.952852 4828 scope.go:117] "RemoveContainer" containerID="065db557e2b177922977373d8e79e135518cd0a8a9d6f564925c4f2d30320b5e"
Nov 29 07:25:32 crc kubenswrapper[4828]: I1129 07:25:32.953376 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-654c45c88d-sbsls"
Nov 29 07:25:33 crc kubenswrapper[4828]: I1129 07:25:33.029149 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8c6f8f658-jqjcb" podUID="18bf2da4-1500-4545-b55a-2a629614b238" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": dial tcp 10.217.0.160:9311: connect: connection refused"
Nov 29 07:25:33 crc kubenswrapper[4828]: I1129 07:25:33.029168 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8c6f8f658-jqjcb" podUID="18bf2da4-1500-4545-b55a-2a629614b238" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": dial tcp 10.217.0.160:9311: connect: connection refused"
Nov 29 07:25:33 crc kubenswrapper[4828]: I1129 07:25:33.868527 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78d5585959-gnl5p"
Nov 29 07:25:33 crc kubenswrapper[4828]: I1129 07:25:33.949315 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-f54k9"]
Nov 29 07:25:33 crc kubenswrapper[4828]: I1129 07:25:33.949562 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" podUID="b7fa3104-0c77-4894-98bd-ecc7ab46c914" containerName="dnsmasq-dns" containerID="cri-o://aedb6d85ce604669aa517d5865251356c75afe5dc5f3e805eed2e3b871c99e6a" gracePeriod=10
Nov 29 07:25:33 crc kubenswrapper[4828]: I1129 07:25:33.985655 4828 generic.go:334] "Generic (PLEG): container finished" podID="d17b2e97-00d7-47ba-8b5c-c911a171bd27" containerID="bbf056d07eb70302ab25f7ae4190b7f5d5a90e65497f461202b1aa5c290f8cd0" exitCode=0
Nov 29 07:25:33 crc kubenswrapper[4828]: I1129 07:25:33.985748 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d17b2e97-00d7-47ba-8b5c-c911a171bd27","Type":"ContainerDied","Data":"bbf056d07eb70302ab25f7ae4190b7f5d5a90e65497f461202b1aa5c290f8cd0"}
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.007452 4828 generic.go:334] "Generic (PLEG): container finished" podID="18bf2da4-1500-4545-b55a-2a629614b238" containerID="9e747e5ca0b0a38acd3319c454970b7bf9a3262a489221ed8aff488f76ab563f" exitCode=0
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.007536 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8c6f8f658-jqjcb" event={"ID":"18bf2da4-1500-4545-b55a-2a629614b238","Type":"ContainerDied","Data":"9e747e5ca0b0a38acd3319c454970b7bf9a3262a489221ed8aff488f76ab563f"}
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.015994 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57744bffdb-m2ffz" event={"ID":"0b0fed58-e5bc-453b-9918-5d1a44dcf00d","Type":"ContainerStarted","Data":"fdeffc2c23a7074a057ed0f257c041712074e4be257b62ef46fddd3f26de560b"}
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.018213 4828 generic.go:334] "Generic (PLEG): container finished" podID="992b3577-23a8-4d07-8826-821fce571ebd" containerID="8ad2111ea3b27ff55663d697edaa9933e2778cbdb6ff0bfdc1c27c25dadb64e9" exitCode=1
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.018256 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-654c45c88d-sbsls" event={"ID":"992b3577-23a8-4d07-8826-821fce571ebd","Type":"ContainerDied","Data":"8ad2111ea3b27ff55663d697edaa9933e2778cbdb6ff0bfdc1c27c25dadb64e9"}
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.018303 4828 scope.go:117] "RemoveContainer" containerID="59a6605c5c67167388a9bc149919ccee40a5aade9409e8d503dccc78fdc75d9f"
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.018912 4828 scope.go:117] "RemoveContainer" containerID="8ad2111ea3b27ff55663d697edaa9933e2778cbdb6ff0bfdc1c27c25dadb64e9"
Nov 29 07:25:34 crc kubenswrapper[4828]: E1129 07:25:34.019187 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-654c45c88d-sbsls_openstack(992b3577-23a8-4d07-8826-821fce571ebd)\"" pod="openstack/heat-api-654c45c88d-sbsls" podUID="992b3577-23a8-4d07-8826-821fce571ebd"
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.530516 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8c6f8f658-jqjcb"
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.635772 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-config-data-custom\") pod \"18bf2da4-1500-4545-b55a-2a629614b238\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") "
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.635846 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-config-data\") pod \"18bf2da4-1500-4545-b55a-2a629614b238\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") "
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.635884 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18bf2da4-1500-4545-b55a-2a629614b238-logs\") pod \"18bf2da4-1500-4545-b55a-2a629614b238\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") "
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.635929 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gfmg\" (UniqueName: \"kubernetes.io/projected/18bf2da4-1500-4545-b55a-2a629614b238-kube-api-access-5gfmg\") pod \"18bf2da4-1500-4545-b55a-2a629614b238\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") "
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.636003 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-combined-ca-bundle\") pod \"18bf2da4-1500-4545-b55a-2a629614b238\" (UID: \"18bf2da4-1500-4545-b55a-2a629614b238\") "
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.643710 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18bf2da4-1500-4545-b55a-2a629614b238-logs" (OuterVolumeSpecName: "logs") pod "18bf2da4-1500-4545-b55a-2a629614b238" (UID: "18bf2da4-1500-4545-b55a-2a629614b238"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.656075 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18bf2da4-1500-4545-b55a-2a629614b238-kube-api-access-5gfmg" (OuterVolumeSpecName: "kube-api-access-5gfmg") pod "18bf2da4-1500-4545-b55a-2a629614b238" (UID: "18bf2da4-1500-4545-b55a-2a629614b238"). InnerVolumeSpecName "kube-api-access-5gfmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.761080 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "18bf2da4-1500-4545-b55a-2a629614b238" (UID: "18bf2da4-1500-4545-b55a-2a629614b238"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.778844 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18bf2da4-1500-4545-b55a-2a629614b238-logs\") on node \"crc\" DevicePath \"\""
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.778891 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gfmg\" (UniqueName: \"kubernetes.io/projected/18bf2da4-1500-4545-b55a-2a629614b238-kube-api-access-5gfmg\") on node \"crc\" DevicePath \"\""
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.778901 4828 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.868526 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18bf2da4-1500-4545-b55a-2a629614b238" (UID: "18bf2da4-1500-4545-b55a-2a629614b238"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.886792 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.928586 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-config-data" (OuterVolumeSpecName: "config-data") pod "18bf2da4-1500-4545-b55a-2a629614b238" (UID: "18bf2da4-1500-4545-b55a-2a629614b238"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:25:34 crc kubenswrapper[4828]: I1129 07:25:34.988622 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18bf2da4-1500-4545-b55a-2a629614b238-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:25:35 crc kubenswrapper[4828]: I1129 07:25:35.028454 4828 scope.go:117] "RemoveContainer" containerID="8ad2111ea3b27ff55663d697edaa9933e2778cbdb6ff0bfdc1c27c25dadb64e9"
Nov 29 07:25:35 crc kubenswrapper[4828]: E1129 07:25:35.028767 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-654c45c88d-sbsls_openstack(992b3577-23a8-4d07-8826-821fce571ebd)\"" pod="openstack/heat-api-654c45c88d-sbsls" podUID="992b3577-23a8-4d07-8826-821fce571ebd"
Nov 29 07:25:35 crc kubenswrapper[4828]: I1129 07:25:35.029137 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8c6f8f658-jqjcb"
Nov 29 07:25:35 crc kubenswrapper[4828]: I1129 07:25:35.031487 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8c6f8f658-jqjcb" event={"ID":"18bf2da4-1500-4545-b55a-2a629614b238","Type":"ContainerDied","Data":"0bd9085cb9c5b25986847480b40d7dfc3dab9aa8b383c4c059c5ee31a5aad8d9"}
Nov 29 07:25:35 crc kubenswrapper[4828]: I1129 07:25:35.031749 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-57744bffdb-m2ffz"
Nov 29 07:25:35 crc kubenswrapper[4828]: I1129 07:25:35.135721 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8c6f8f658-jqjcb"]
Nov 29 07:25:35 crc kubenswrapper[4828]: I1129 07:25:35.144644 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-8c6f8f658-jqjcb"]
Nov 29 07:25:35 crc kubenswrapper[4828]: I1129 07:25:35.424572 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18bf2da4-1500-4545-b55a-2a629614b238" path="/var/lib/kubelet/pods/18bf2da4-1500-4545-b55a-2a629614b238/volumes"
Nov 29 07:25:35 crc kubenswrapper[4828]: I1129 07:25:35.697358 4828 scope.go:117] "RemoveContainer" containerID="9e747e5ca0b0a38acd3319c454970b7bf9a3262a489221ed8aff488f76ab563f"
Nov 29 07:25:35 crc kubenswrapper[4828]: I1129 07:25:35.721006 4828 scope.go:117] "RemoveContainer" containerID="56b7194e30ca951f8c1038ba9c1065b89eba6bbe44a5b3d4b6b6b73a84d2657f"
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.054751 4828 generic.go:334] "Generic (PLEG): container finished" podID="b7fa3104-0c77-4894-98bd-ecc7ab46c914" containerID="aedb6d85ce604669aa517d5865251356c75afe5dc5f3e805eed2e3b871c99e6a" exitCode=0
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.055594 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" event={"ID":"b7fa3104-0c77-4894-98bd-ecc7ab46c914","Type":"ContainerDied","Data":"aedb6d85ce604669aa517d5865251356c75afe5dc5f3e805eed2e3b871c99e6a"}
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.063864 4828 generic.go:334] "Generic (PLEG): container finished" podID="0b0fed58-e5bc-453b-9918-5d1a44dcf00d" containerID="fdeffc2c23a7074a057ed0f257c041712074e4be257b62ef46fddd3f26de560b" exitCode=1
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.063910 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57744bffdb-m2ffz" event={"ID":"0b0fed58-e5bc-453b-9918-5d1a44dcf00d","Type":"ContainerDied","Data":"fdeffc2c23a7074a057ed0f257c041712074e4be257b62ef46fddd3f26de560b"}
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.063946 4828 scope.go:117] "RemoveContainer" containerID="065db557e2b177922977373d8e79e135518cd0a8a9d6f564925c4f2d30320b5e"
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.064604 4828 scope.go:117] "RemoveContainer" containerID="fdeffc2c23a7074a057ed0f257c041712074e4be257b62ef46fddd3f26de560b"
Nov 29 07:25:36 crc kubenswrapper[4828]: E1129 07:25:36.064951 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-57744bffdb-m2ffz_openstack(0b0fed58-e5bc-453b-9918-5d1a44dcf00d)\"" pod="openstack/heat-cfnapi-57744bffdb-m2ffz" podUID="0b0fed58-e5bc-453b-9918-5d1a44dcf00d"
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.171032 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.267036 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-77df56fcb4-fs2h4"]
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.267247 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-77df56fcb4-fs2h4" podUID="1227653a-94b6-4867-b24a-3a6e70f62d3b" containerName="heat-api" containerID="cri-o://20d68f35c6e76b6d0e3298e6411d717c0194a1d6c78fd8840722ae9a632611ab" gracePeriod=60
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.284473 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d17b2e97-00d7-47ba-8b5c-c911a171bd27-etc-machine-id\") pod \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") "
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.284628 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-config-data\") pod \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") "
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.284735 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-config-data-custom\") pod \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") "
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.284760 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktsj7\" (UniqueName: \"kubernetes.io/projected/d17b2e97-00d7-47ba-8b5c-c911a171bd27-kube-api-access-ktsj7\") pod \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") "
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.284866 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-scripts\") pod \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.284904 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-combined-ca-bundle\") pod \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\" (UID: \"d17b2e97-00d7-47ba-8b5c-c911a171bd27\") " Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.294212 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d17b2e97-00d7-47ba-8b5c-c911a171bd27-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d17b2e97-00d7-47ba-8b5c-c911a171bd27" (UID: "d17b2e97-00d7-47ba-8b5c-c911a171bd27"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.306925 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d17b2e97-00d7-47ba-8b5c-c911a171bd27-kube-api-access-ktsj7" (OuterVolumeSpecName: "kube-api-access-ktsj7") pod "d17b2e97-00d7-47ba-8b5c-c911a171bd27" (UID: "d17b2e97-00d7-47ba-8b5c-c911a171bd27"). InnerVolumeSpecName "kube-api-access-ktsj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.305482 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7c4d784bd9-s5pdk"] Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.308722 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" podUID="76228783-3735-4393-af2d-cd8ace3bd0aa" containerName="heat-cfnapi" containerID="cri-o://89e752c81f3d43be2e681eb602f21fa5e468e7cc86ada9f17f3a49cf46df7fc1" gracePeriod=60 Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.326572 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d17b2e97-00d7-47ba-8b5c-c911a171bd27" (UID: "d17b2e97-00d7-47ba-8b5c-c911a171bd27"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.326629 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-scripts" (OuterVolumeSpecName: "scripts") pod "d17b2e97-00d7-47ba-8b5c-c911a171bd27" (UID: "d17b2e97-00d7-47ba-8b5c-c911a171bd27"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.329700 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" podUID="76228783-3735-4393-af2d-cd8ace3bd0aa" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.164:8000/healthcheck\": EOF" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.340957 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-59fbdb74df-c54jw"] Nov 29 07:25:36 crc kubenswrapper[4828]: E1129 07:25:36.341365 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18bf2da4-1500-4545-b55a-2a629614b238" containerName="barbican-api" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.341382 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="18bf2da4-1500-4545-b55a-2a629614b238" containerName="barbican-api" Nov 29 07:25:36 crc kubenswrapper[4828]: E1129 07:25:36.341411 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18bf2da4-1500-4545-b55a-2a629614b238" containerName="barbican-api-log" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.341419 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="18bf2da4-1500-4545-b55a-2a629614b238" containerName="barbican-api-log" Nov 29 07:25:36 crc kubenswrapper[4828]: E1129 07:25:36.341431 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17b2e97-00d7-47ba-8b5c-c911a171bd27" containerName="cinder-scheduler" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.341437 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17b2e97-00d7-47ba-8b5c-c911a171bd27" containerName="cinder-scheduler" Nov 29 07:25:36 crc kubenswrapper[4828]: E1129 07:25:36.341453 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17b2e97-00d7-47ba-8b5c-c911a171bd27" containerName="probe" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.341459 4828 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d17b2e97-00d7-47ba-8b5c-c911a171bd27" containerName="probe" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.341623 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="d17b2e97-00d7-47ba-8b5c-c911a171bd27" containerName="probe" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.341639 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="18bf2da4-1500-4545-b55a-2a629614b238" containerName="barbican-api-log" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.341648 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="d17b2e97-00d7-47ba-8b5c-c911a171bd27" containerName="cinder-scheduler" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.341658 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="18bf2da4-1500-4545-b55a-2a629614b238" containerName="barbican-api" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.352213 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.353153 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" podUID="76228783-3735-4393-af2d-cd8ace3bd0aa" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.164:8000/healthcheck\": EOF" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.353299 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" podUID="76228783-3735-4393-af2d-cd8ace3bd0aa" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.164:8000/healthcheck\": EOF" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.361922 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.362181 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.452777 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-59fbdb74df-c54jw"] Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.457918 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-config-data\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.457969 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-combined-ca-bundle\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " 
pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.457989 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-internal-tls-certs\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.458052 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w4kx\" (UniqueName: \"kubernetes.io/projected/930ded64-8acc-4fc6-b729-034214fa160b-kube-api-access-7w4kx\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.458071 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-config-data-custom\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.458100 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-public-tls-certs\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.458162 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.458172 4828 
reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d17b2e97-00d7-47ba-8b5c-c911a171bd27-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.458182 4828 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.458191 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktsj7\" (UniqueName: \"kubernetes.io/projected/d17b2e97-00d7-47ba-8b5c-c911a171bd27-kube-api-access-ktsj7\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.490561 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7f579788cb-tbwlt"] Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.492555 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.497730 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.505129 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.519593 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f579788cb-tbwlt"] Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.559377 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-config-data\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.579108 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-combined-ca-bundle\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.579153 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-internal-tls-certs\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.579261 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-config-data-custom\") pod 
\"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.579292 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w4kx\" (UniqueName: \"kubernetes.io/projected/930ded64-8acc-4fc6-b729-034214fa160b-kube-api-access-7w4kx\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.579326 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-public-tls-certs\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.588243 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-config-data\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.589192 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-public-tls-certs\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.594974 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-combined-ca-bundle\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " 
pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.598983 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-internal-tls-certs\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.614108 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/930ded64-8acc-4fc6-b729-034214fa160b-config-data-custom\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.627491 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d17b2e97-00d7-47ba-8b5c-c911a171bd27" (UID: "d17b2e97-00d7-47ba-8b5c-c911a171bd27"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.651201 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w4kx\" (UniqueName: \"kubernetes.io/projected/930ded64-8acc-4fc6-b729-034214fa160b-kube-api-access-7w4kx\") pod \"heat-api-59fbdb74df-c54jw\" (UID: \"930ded64-8acc-4fc6-b729-034214fa160b\") " pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.685852 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-config-data-custom\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.685901 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-config-data\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.685939 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-internal-tls-certs\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.685986 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-public-tls-certs\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: 
\"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.686032 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-combined-ca-bundle\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.686198 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjc7j\" (UniqueName: \"kubernetes.io/projected/1cb551ca-3225-4ed7-9127-04f6a4abe792-kube-api-access-qjc7j\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.686306 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.729934 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-mqsbn"] Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.731698 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mqsbn" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.757520 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.764670 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mqsbn"] Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.781586 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-8p6dr"] Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.783200 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8p6dr" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.794592 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjc7j\" (UniqueName: \"kubernetes.io/projected/1cb551ca-3225-4ed7-9127-04f6a4abe792-kube-api-access-qjc7j\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.794661 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-config-data-custom\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.794682 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-config-data\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.794708 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-internal-tls-certs\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.794731 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-public-tls-certs\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.794760 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-combined-ca-bundle\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.807580 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-public-tls-certs\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.809570 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-internal-tls-certs\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.824827 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-config-data-custom\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.824992 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-combined-ca-bundle\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.826830 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb551ca-3225-4ed7-9127-04f6a4abe792-config-data\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.843754 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-05c6-account-create-update-v2dls"] Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.856895 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjc7j\" (UniqueName: \"kubernetes.io/projected/1cb551ca-3225-4ed7-9127-04f6a4abe792-kube-api-access-qjc7j\") pod \"heat-cfnapi-7f579788cb-tbwlt\" (UID: \"1cb551ca-3225-4ed7-9127-04f6a4abe792\") " pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.859346 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8p6dr"] Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.859446 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-05c6-account-create-update-v2dls" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.868584 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-05c6-account-create-update-v2dls"] Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.870503 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.890026 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-kqxf5"] Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.891465 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kqxf5" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.897026 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jcrl\" (UniqueName: \"kubernetes.io/projected/48e37f07-ea33-4cb7-abc1-2bd210005773-kube-api-access-5jcrl\") pod \"nova-cell0-db-create-8p6dr\" (UID: \"48e37f07-ea33-4cb7-abc1-2bd210005773\") " pod="openstack/nova-cell0-db-create-8p6dr" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.897083 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96d052ca-6f4c-4aa1-a411-da901c59e32e-operator-scripts\") pod \"nova-api-db-create-mqsbn\" (UID: \"96d052ca-6f4c-4aa1-a411-da901c59e32e\") " pod="openstack/nova-api-db-create-mqsbn" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.897113 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmtjq\" (UniqueName: \"kubernetes.io/projected/96d052ca-6f4c-4aa1-a411-da901c59e32e-kube-api-access-dmtjq\") pod \"nova-api-db-create-mqsbn\" (UID: \"96d052ca-6f4c-4aa1-a411-da901c59e32e\") " pod="openstack/nova-api-db-create-mqsbn" 
Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.897365 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48e37f07-ea33-4cb7-abc1-2bd210005773-operator-scripts\") pod \"nova-cell0-db-create-8p6dr\" (UID: \"48e37f07-ea33-4cb7-abc1-2bd210005773\") " pod="openstack/nova-cell0-db-create-8p6dr" Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.917179 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kqxf5"] Nov 29 07:25:36 crc kubenswrapper[4828]: I1129 07:25:36.950635 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-config-data" (OuterVolumeSpecName: "config-data") pod "d17b2e97-00d7-47ba-8b5c-c911a171bd27" (UID: "d17b2e97-00d7-47ba-8b5c-c911a171bd27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.022818 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-854f-account-create-update-ftz6n"] Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.026060 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-854f-account-create-update-ftz6n" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.070261 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jcrl\" (UniqueName: \"kubernetes.io/projected/48e37f07-ea33-4cb7-abc1-2bd210005773-kube-api-access-5jcrl\") pod \"nova-cell0-db-create-8p6dr\" (UID: \"48e37f07-ea33-4cb7-abc1-2bd210005773\") " pod="openstack/nova-cell0-db-create-8p6dr" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.070345 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96d052ca-6f4c-4aa1-a411-da901c59e32e-operator-scripts\") pod \"nova-api-db-create-mqsbn\" (UID: \"96d052ca-6f4c-4aa1-a411-da901c59e32e\") " pod="openstack/nova-api-db-create-mqsbn" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.070367 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmtjq\" (UniqueName: \"kubernetes.io/projected/96d052ca-6f4c-4aa1-a411-da901c59e32e-kube-api-access-dmtjq\") pod \"nova-api-db-create-mqsbn\" (UID: \"96d052ca-6f4c-4aa1-a411-da901c59e32e\") " pod="openstack/nova-api-db-create-mqsbn" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.070418 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prxwt\" (UniqueName: \"kubernetes.io/projected/718898d1-9f1d-442b-a581-b388f358f77d-kube-api-access-prxwt\") pod \"nova-cell1-db-create-kqxf5\" (UID: \"718898d1-9f1d-442b-a581-b388f358f77d\") " pod="openstack/nova-cell1-db-create-kqxf5" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.070629 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32592977-0620-41a0-9032-84d6dfeba740-operator-scripts\") pod 
\"nova-api-05c6-account-create-update-v2dls\" (UID: \"32592977-0620-41a0-9032-84d6dfeba740\") " pod="openstack/nova-api-05c6-account-create-update-v2dls" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.070678 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48e37f07-ea33-4cb7-abc1-2bd210005773-operator-scripts\") pod \"nova-cell0-db-create-8p6dr\" (UID: \"48e37f07-ea33-4cb7-abc1-2bd210005773\") " pod="openstack/nova-cell0-db-create-8p6dr" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.070721 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2klxt\" (UniqueName: \"kubernetes.io/projected/32592977-0620-41a0-9032-84d6dfeba740-kube-api-access-2klxt\") pod \"nova-api-05c6-account-create-update-v2dls\" (UID: \"32592977-0620-41a0-9032-84d6dfeba740\") " pod="openstack/nova-api-05c6-account-create-update-v2dls" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.070851 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/718898d1-9f1d-442b-a581-b388f358f77d-operator-scripts\") pod \"nova-cell1-db-create-kqxf5\" (UID: \"718898d1-9f1d-442b-a581-b388f358f77d\") " pod="openstack/nova-cell1-db-create-kqxf5" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.070939 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17b2e97-00d7-47ba-8b5c-c911a171bd27-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.072041 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96d052ca-6f4c-4aa1-a411-da901c59e32e-operator-scripts\") pod \"nova-api-db-create-mqsbn\" (UID: \"96d052ca-6f4c-4aa1-a411-da901c59e32e\") " 
pod="openstack/nova-api-db-create-mqsbn" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.072799 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48e37f07-ea33-4cb7-abc1-2bd210005773-operator-scripts\") pod \"nova-cell0-db-create-8p6dr\" (UID: \"48e37f07-ea33-4cb7-abc1-2bd210005773\") " pod="openstack/nova-cell0-db-create-8p6dr" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.074435 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.076894 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.094795 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jcrl\" (UniqueName: \"kubernetes.io/projected/48e37f07-ea33-4cb7-abc1-2bd210005773-kube-api-access-5jcrl\") pod \"nova-cell0-db-create-8p6dr\" (UID: \"48e37f07-ea33-4cb7-abc1-2bd210005773\") " pod="openstack/nova-cell0-db-create-8p6dr" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.116230 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmtjq\" (UniqueName: \"kubernetes.io/projected/96d052ca-6f4c-4aa1-a411-da901c59e32e-kube-api-access-dmtjq\") pod \"nova-api-db-create-mqsbn\" (UID: \"96d052ca-6f4c-4aa1-a411-da901c59e32e\") " pod="openstack/nova-api-db-create-mqsbn" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.146493 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8p6dr" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.162433 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-854f-account-create-update-ftz6n"] Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.163134 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d17b2e97-00d7-47ba-8b5c-c911a171bd27","Type":"ContainerDied","Data":"b713118ffa590e0d0d37a1eaf6ba7bd755dd036c9883fe5a57a7de915cbe3cc6"} Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.163500 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.164297 4828 scope.go:117] "RemoveContainer" containerID="fdeffc2c23a7074a057ed0f257c041712074e4be257b62ef46fddd3f26de560b" Nov 29 07:25:37 crc kubenswrapper[4828]: E1129 07:25:37.164696 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-57744bffdb-m2ffz_openstack(0b0fed58-e5bc-453b-9918-5d1a44dcf00d)\"" pod="openstack/heat-cfnapi-57744bffdb-m2ffz" podUID="0b0fed58-e5bc-453b-9918-5d1a44dcf00d" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.190931 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prxwt\" (UniqueName: \"kubernetes.io/projected/718898d1-9f1d-442b-a581-b388f358f77d-kube-api-access-prxwt\") pod \"nova-cell1-db-create-kqxf5\" (UID: \"718898d1-9f1d-442b-a581-b388f358f77d\") " pod="openstack/nova-cell1-db-create-kqxf5" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.191368 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvztb\" (UniqueName: 
\"kubernetes.io/projected/bd70a089-5326-4b8b-8090-f22b19860d0e-kube-api-access-kvztb\") pod \"nova-cell0-854f-account-create-update-ftz6n\" (UID: \"bd70a089-5326-4b8b-8090-f22b19860d0e\") " pod="openstack/nova-cell0-854f-account-create-update-ftz6n" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.191426 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32592977-0620-41a0-9032-84d6dfeba740-operator-scripts\") pod \"nova-api-05c6-account-create-update-v2dls\" (UID: \"32592977-0620-41a0-9032-84d6dfeba740\") " pod="openstack/nova-api-05c6-account-create-update-v2dls" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.191463 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2klxt\" (UniqueName: \"kubernetes.io/projected/32592977-0620-41a0-9032-84d6dfeba740-kube-api-access-2klxt\") pod \"nova-api-05c6-account-create-update-v2dls\" (UID: \"32592977-0620-41a0-9032-84d6dfeba740\") " pod="openstack/nova-api-05c6-account-create-update-v2dls" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.191529 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd70a089-5326-4b8b-8090-f22b19860d0e-operator-scripts\") pod \"nova-cell0-854f-account-create-update-ftz6n\" (UID: \"bd70a089-5326-4b8b-8090-f22b19860d0e\") " pod="openstack/nova-cell0-854f-account-create-update-ftz6n" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.191557 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/718898d1-9f1d-442b-a581-b388f358f77d-operator-scripts\") pod \"nova-cell1-db-create-kqxf5\" (UID: \"718898d1-9f1d-442b-a581-b388f358f77d\") " pod="openstack/nova-cell1-db-create-kqxf5" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.192261 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/718898d1-9f1d-442b-a581-b388f358f77d-operator-scripts\") pod \"nova-cell1-db-create-kqxf5\" (UID: \"718898d1-9f1d-442b-a581-b388f358f77d\") " pod="openstack/nova-cell1-db-create-kqxf5" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.193461 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32592977-0620-41a0-9032-84d6dfeba740-operator-scripts\") pod \"nova-api-05c6-account-create-update-v2dls\" (UID: \"32592977-0620-41a0-9032-84d6dfeba740\") " pod="openstack/nova-api-05c6-account-create-update-v2dls" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.213777 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2klxt\" (UniqueName: \"kubernetes.io/projected/32592977-0620-41a0-9032-84d6dfeba740-kube-api-access-2klxt\") pod \"nova-api-05c6-account-create-update-v2dls\" (UID: \"32592977-0620-41a0-9032-84d6dfeba740\") " pod="openstack/nova-api-05c6-account-create-update-v2dls" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.213871 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prxwt\" (UniqueName: \"kubernetes.io/projected/718898d1-9f1d-442b-a581-b388f358f77d-kube-api-access-prxwt\") pod \"nova-cell1-db-create-kqxf5\" (UID: \"718898d1-9f1d-442b-a581-b388f358f77d\") " pod="openstack/nova-cell1-db-create-kqxf5" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.233395 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-77df56fcb4-fs2h4" podUID="1227653a-94b6-4867-b24a-3a6e70f62d3b" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.165:8004/healthcheck\": read tcp 10.217.0.2:37600->10.217.0.165:8004: read: connection reset by peer" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.234362 4828 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-05c6-account-create-update-v2dls" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.256875 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1c66-account-create-update-ptpql"] Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.265962 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1c66-account-create-update-ptpql" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.269177 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kqxf5" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.270310 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1c66-account-create-update-ptpql"] Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.270512 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.295131 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.296541 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wlrj\" (UniqueName: \"kubernetes.io/projected/a739da00-650f-46d6-accb-f9e0e93df7af-kube-api-access-8wlrj\") pod \"nova-cell1-1c66-account-create-update-ptpql\" (UID: \"a739da00-650f-46d6-accb-f9e0e93df7af\") " pod="openstack/nova-cell1-1c66-account-create-update-ptpql" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.296610 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvztb\" (UniqueName: \"kubernetes.io/projected/bd70a089-5326-4b8b-8090-f22b19860d0e-kube-api-access-kvztb\") pod \"nova-cell0-854f-account-create-update-ftz6n\" (UID: 
\"bd70a089-5326-4b8b-8090-f22b19860d0e\") " pod="openstack/nova-cell0-854f-account-create-update-ftz6n" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.296718 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd70a089-5326-4b8b-8090-f22b19860d0e-operator-scripts\") pod \"nova-cell0-854f-account-create-update-ftz6n\" (UID: \"bd70a089-5326-4b8b-8090-f22b19860d0e\") " pod="openstack/nova-cell0-854f-account-create-update-ftz6n" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.296798 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a739da00-650f-46d6-accb-f9e0e93df7af-operator-scripts\") pod \"nova-cell1-1c66-account-create-update-ptpql\" (UID: \"a739da00-650f-46d6-accb-f9e0e93df7af\") " pod="openstack/nova-cell1-1c66-account-create-update-ptpql" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.297468 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd70a089-5326-4b8b-8090-f22b19860d0e-operator-scripts\") pod \"nova-cell0-854f-account-create-update-ftz6n\" (UID: \"bd70a089-5326-4b8b-8090-f22b19860d0e\") " pod="openstack/nova-cell0-854f-account-create-update-ftz6n" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.307543 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.313721 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvztb\" (UniqueName: \"kubernetes.io/projected/bd70a089-5326-4b8b-8090-f22b19860d0e-kube-api-access-kvztb\") pod \"nova-cell0-854f-account-create-update-ftz6n\" (UID: \"bd70a089-5326-4b8b-8090-f22b19860d0e\") " pod="openstack/nova-cell0-854f-account-create-update-ftz6n" Nov 29 07:25:37 crc 
kubenswrapper[4828]: I1129 07:25:37.315692 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.317671 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.322643 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.331864 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.398838 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mqsbn" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.399380 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.399417 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-scripts\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.399510 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-config-data\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: 
I1129 07:25:37.399541 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.399588 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a739da00-650f-46d6-accb-f9e0e93df7af-operator-scripts\") pod \"nova-cell1-1c66-account-create-update-ptpql\" (UID: \"a739da00-650f-46d6-accb-f9e0e93df7af\") " pod="openstack/nova-cell1-1c66-account-create-update-ptpql" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.399675 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wlrj\" (UniqueName: \"kubernetes.io/projected/a739da00-650f-46d6-accb-f9e0e93df7af-kube-api-access-8wlrj\") pod \"nova-cell1-1c66-account-create-update-ptpql\" (UID: \"a739da00-650f-46d6-accb-f9e0e93df7af\") " pod="openstack/nova-cell1-1c66-account-create-update-ptpql" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.399699 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl9v7\" (UniqueName: \"kubernetes.io/projected/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-kube-api-access-nl9v7\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.399756 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" 
Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.400788 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a739da00-650f-46d6-accb-f9e0e93df7af-operator-scripts\") pod \"nova-cell1-1c66-account-create-update-ptpql\" (UID: \"a739da00-650f-46d6-accb-f9e0e93df7af\") " pod="openstack/nova-cell1-1c66-account-create-update-ptpql" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.416537 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-854f-account-create-update-ftz6n" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.418140 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wlrj\" (UniqueName: \"kubernetes.io/projected/a739da00-650f-46d6-accb-f9e0e93df7af-kube-api-access-8wlrj\") pod \"nova-cell1-1c66-account-create-update-ptpql\" (UID: \"a739da00-650f-46d6-accb-f9e0e93df7af\") " pod="openstack/nova-cell1-1c66-account-create-update-ptpql" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.425748 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d17b2e97-00d7-47ba-8b5c-c911a171bd27" path="/var/lib/kubelet/pods/d17b2e97-00d7-47ba-8b5c-c911a171bd27/volumes" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.481918 4828 scope.go:117] "RemoveContainer" containerID="3e94638de42e8008640ce6f10ef811ec789a580c48f4c396f453977edd15f70f" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.510104 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl9v7\" (UniqueName: \"kubernetes.io/projected/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-kube-api-access-nl9v7\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.510238 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.510416 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.510470 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-scripts\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.510674 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-config-data\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.510711 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.510846 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.517875 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.518195 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-scripts\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.518714 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.521774 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-config-data\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.535000 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl9v7\" (UniqueName: \"kubernetes.io/projected/f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9-kube-api-access-nl9v7\") pod \"cinder-scheduler-0\" (UID: \"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9\") " pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.616109 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1c66-account-create-update-ptpql" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.645936 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.705779 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.836860 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-config\") pod \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.837001 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-ovsdbserver-sb\") pod \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.837085 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-dns-svc\") pod \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.837180 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-ovsdbserver-nb\") pod \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.837413 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-5tmkd\" (UniqueName: \"kubernetes.io/projected/b7fa3104-0c77-4894-98bd-ecc7ab46c914-kube-api-access-5tmkd\") pod \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.837773 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-dns-swift-storage-0\") pod \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\" (UID: \"b7fa3104-0c77-4894-98bd-ecc7ab46c914\") " Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.856100 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7fa3104-0c77-4894-98bd-ecc7ab46c914-kube-api-access-5tmkd" (OuterVolumeSpecName: "kube-api-access-5tmkd") pod "b7fa3104-0c77-4894-98bd-ecc7ab46c914" (UID: "b7fa3104-0c77-4894-98bd-ecc7ab46c914"). InnerVolumeSpecName "kube-api-access-5tmkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.934702 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-57744bffdb-m2ffz" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.941063 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tmkd\" (UniqueName: \"kubernetes.io/projected/b7fa3104-0c77-4894-98bd-ecc7ab46c914-kube-api-access-5tmkd\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.953790 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-654c45c88d-sbsls" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.954609 4828 scope.go:117] "RemoveContainer" containerID="8ad2111ea3b27ff55663d697edaa9933e2778cbdb6ff0bfdc1c27c25dadb64e9" Nov 29 07:25:37 crc kubenswrapper[4828]: E1129 07:25:37.954890 4828 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-654c45c88d-sbsls_openstack(992b3577-23a8-4d07-8826-821fce571ebd)\"" pod="openstack/heat-api-654c45c88d-sbsls" podUID="992b3577-23a8-4d07-8826-821fce571ebd" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.955258 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-654c45c88d-sbsls" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.980997 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b7fa3104-0c77-4894-98bd-ecc7ab46c914" (UID: "b7fa3104-0c77-4894-98bd-ecc7ab46c914"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:37 crc kubenswrapper[4828]: I1129 07:25:37.988573 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-config" (OuterVolumeSpecName: "config") pod "b7fa3104-0c77-4894-98bd-ecc7ab46c914" (UID: "b7fa3104-0c77-4894-98bd-ecc7ab46c914"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.009555 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b7fa3104-0c77-4894-98bd-ecc7ab46c914" (UID: "b7fa3104-0c77-4894-98bd-ecc7ab46c914"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.018786 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7fa3104-0c77-4894-98bd-ecc7ab46c914" (UID: "b7fa3104-0c77-4894-98bd-ecc7ab46c914"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.019001 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b7fa3104-0c77-4894-98bd-ecc7ab46c914" (UID: "b7fa3104-0c77-4894-98bd-ecc7ab46c914"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.042736 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.042789 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.042804 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.042813 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:38 crc 
kubenswrapper[4828]: I1129 07:25:38.042823 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7fa3104-0c77-4894-98bd-ecc7ab46c914-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.177866 4828 scope.go:117] "RemoveContainer" containerID="fdeffc2c23a7074a057ed0f257c041712074e4be257b62ef46fddd3f26de560b" Nov 29 07:25:38 crc kubenswrapper[4828]: E1129 07:25:38.178126 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-57744bffdb-m2ffz_openstack(0b0fed58-e5bc-453b-9918-5d1a44dcf00d)\"" pod="openstack/heat-cfnapi-57744bffdb-m2ffz" podUID="0b0fed58-e5bc-453b-9918-5d1a44dcf00d" Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.179475 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" event={"ID":"b7fa3104-0c77-4894-98bd-ecc7ab46c914","Type":"ContainerDied","Data":"f8d914ad42697964d69b3ef5231fb75f85af7d0c1769e8ff4f2cfc98104e07d1"} Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.179497 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-f54k9" Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.180255 4828 scope.go:117] "RemoveContainer" containerID="8ad2111ea3b27ff55663d697edaa9933e2778cbdb6ff0bfdc1c27c25dadb64e9" Nov 29 07:25:38 crc kubenswrapper[4828]: E1129 07:25:38.180517 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-654c45c88d-sbsls_openstack(992b3577-23a8-4d07-8826-821fce571ebd)\"" pod="openstack/heat-api-654c45c88d-sbsls" podUID="992b3577-23a8-4d07-8826-821fce571ebd" Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.214690 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-f54k9"] Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.228522 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-f54k9"] Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.398541 4828 scope.go:117] "RemoveContainer" containerID="bbf056d07eb70302ab25f7ae4190b7f5d5a90e65497f461202b1aa5c290f8cd0" Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.656495 4828 scope.go:117] "RemoveContainer" containerID="aedb6d85ce604669aa517d5865251356c75afe5dc5f3e805eed2e3b871c99e6a" Nov 29 07:25:38 crc kubenswrapper[4828]: I1129 07:25:38.922226 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.238946 4828 generic.go:334] "Generic (PLEG): container finished" podID="1227653a-94b6-4867-b24a-3a6e70f62d3b" containerID="20d68f35c6e76b6d0e3298e6411d717c0194a1d6c78fd8840722ae9a632611ab" exitCode=0 Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.239252 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77df56fcb4-fs2h4" 
event={"ID":"1227653a-94b6-4867-b24a-3a6e70f62d3b","Type":"ContainerDied","Data":"20d68f35c6e76b6d0e3298e6411d717c0194a1d6c78fd8840722ae9a632611ab"} Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.293119 4828 scope.go:117] "RemoveContainer" containerID="fc6fbc08e0b3fdddfbff63d1dc19611b74461daae7afac88225ba380bb81565d" Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.382943 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.555232 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-combined-ca-bundle\") pod \"1227653a-94b6-4867-b24a-3a6e70f62d3b\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.555417 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-config-data-custom\") pod \"1227653a-94b6-4867-b24a-3a6e70f62d3b\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.555577 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9f88\" (UniqueName: \"kubernetes.io/projected/1227653a-94b6-4867-b24a-3a6e70f62d3b-kube-api-access-b9f88\") pod \"1227653a-94b6-4867-b24a-3a6e70f62d3b\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.555608 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-config-data\") pod \"1227653a-94b6-4867-b24a-3a6e70f62d3b\" (UID: \"1227653a-94b6-4867-b24a-3a6e70f62d3b\") " Nov 29 07:25:39 crc kubenswrapper[4828]: W1129 
07:25:39.570219 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod718898d1_9f1d_442b_a581_b388f358f77d.slice/crio-bafc49439528d2eef3527a5aada5149850d8258ecfc3a2a451dfae9fb1759b17 WatchSource:0}: Error finding container bafc49439528d2eef3527a5aada5149850d8258ecfc3a2a451dfae9fb1759b17: Status 404 returned error can't find the container with id bafc49439528d2eef3527a5aada5149850d8258ecfc3a2a451dfae9fb1759b17 Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.570330 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1227653a-94b6-4867-b24a-3a6e70f62d3b-kube-api-access-b9f88" (OuterVolumeSpecName: "kube-api-access-b9f88") pod "1227653a-94b6-4867-b24a-3a6e70f62d3b" (UID: "1227653a-94b6-4867-b24a-3a6e70f62d3b"). InnerVolumeSpecName "kube-api-access-b9f88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.570499 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1227653a-94b6-4867-b24a-3a6e70f62d3b" (UID: "1227653a-94b6-4867-b24a-3a6e70f62d3b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.592777 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7fa3104-0c77-4894-98bd-ecc7ab46c914" path="/var/lib/kubelet/pods/b7fa3104-0c77-4894-98bd-ecc7ab46c914/volumes" Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.659502 4828 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.659658 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9f88\" (UniqueName: \"kubernetes.io/projected/1227653a-94b6-4867-b24a-3a6e70f62d3b-kube-api-access-b9f88\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.667664 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1227653a-94b6-4867-b24a-3a6e70f62d3b" (UID: "1227653a-94b6-4867-b24a-3a6e70f62d3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.667731 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-config-data" (OuterVolumeSpecName: "config-data") pod "1227653a-94b6-4867-b24a-3a6e70f62d3b" (UID: "1227653a-94b6-4867-b24a-3a6e70f62d3b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.767483 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:39 crc kubenswrapper[4828]: I1129 07:25:39.803077 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1227653a-94b6-4867-b24a-3a6e70f62d3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:40 crc kubenswrapper[4828]: I1129 07:25:40.117080 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 29 07:25:40 crc kubenswrapper[4828]: I1129 07:25:40.117425 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kqxf5"] Nov 29 07:25:40 crc kubenswrapper[4828]: I1129 07:25:40.117443 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f579788cb-tbwlt"] Nov 29 07:25:40 crc kubenswrapper[4828]: I1129 07:25:40.262330 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kqxf5" event={"ID":"718898d1-9f1d-442b-a581-b388f358f77d","Type":"ContainerStarted","Data":"bafc49439528d2eef3527a5aada5149850d8258ecfc3a2a451dfae9fb1759b17"} Nov 29 07:25:40 crc kubenswrapper[4828]: I1129 07:25:40.270841 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f579788cb-tbwlt" event={"ID":"1cb551ca-3225-4ed7-9127-04f6a4abe792","Type":"ContainerStarted","Data":"5723081f2c32117943bc1c305f9df8896170d365a1af8275eb49a3bceb2c04b6"} Nov 29 07:25:40 crc kubenswrapper[4828]: I1129 07:25:40.275209 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77df56fcb4-fs2h4" 
event={"ID":"1227653a-94b6-4867-b24a-3a6e70f62d3b","Type":"ContainerDied","Data":"d0ec688f5d2aec85c400c62dadb3c787bc9d3c60c771eb37c53d4095c069e0ce"} Nov 29 07:25:40 crc kubenswrapper[4828]: I1129 07:25:40.275306 4828 scope.go:117] "RemoveContainer" containerID="20d68f35c6e76b6d0e3298e6411d717c0194a1d6c78fd8840722ae9a632611ab" Nov 29 07:25:40 crc kubenswrapper[4828]: I1129 07:25:40.275775 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-77df56fcb4-fs2h4" Nov 29 07:25:40 crc kubenswrapper[4828]: I1129 07:25:40.339522 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-77df56fcb4-fs2h4"] Nov 29 07:25:40 crc kubenswrapper[4828]: I1129 07:25:40.347779 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-77df56fcb4-fs2h4"] Nov 29 07:25:40 crc kubenswrapper[4828]: I1129 07:25:40.592816 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:25:40 crc kubenswrapper[4828]: I1129 07:25:40.625129 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-59fbdb74df-c54jw"] Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.028511 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-05c6-account-create-update-v2dls"] Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.076964 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mqsbn"] Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.108260 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8p6dr"] Nov 29 07:25:41 crc kubenswrapper[4828]: W1129 07:25:41.111680 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32592977_0620_41a0_9032_84d6dfeba740.slice/crio-d7d4874e516feecb0eb8bebd03b20a3a1cedb79d225eb57702d48e2923a5851c WatchSource:0}: Error finding 
container d7d4874e516feecb0eb8bebd03b20a3a1cedb79d225eb57702d48e2923a5851c: Status 404 returned error can't find the container with id d7d4874e516feecb0eb8bebd03b20a3a1cedb79d225eb57702d48e2923a5851c Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.136723 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1c66-account-create-update-ptpql"] Nov 29 07:25:41 crc kubenswrapper[4828]: W1129 07:25:41.154395 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48e37f07_ea33_4cb7_abc1_2bd210005773.slice/crio-794bdf38129804151e9a2a9012d83d1994617be2b0ec18a2e1162102c74ef3a9 WatchSource:0}: Error finding container 794bdf38129804151e9a2a9012d83d1994617be2b0ec18a2e1162102c74ef3a9: Status 404 returned error can't find the container with id 794bdf38129804151e9a2a9012d83d1994617be2b0ec18a2e1162102c74ef3a9 Nov 29 07:25:41 crc kubenswrapper[4828]: W1129 07:25:41.168945 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda739da00_650f_46d6_accb_f9e0e93df7af.slice/crio-b167027ec7538db6df9039dd9b0eec0ebf64e26b1eafd47e1f9af6e3743a8a3c WatchSource:0}: Error finding container b167027ec7538db6df9039dd9b0eec0ebf64e26b1eafd47e1f9af6e3743a8a3c: Status 404 returned error can't find the container with id b167027ec7538db6df9039dd9b0eec0ebf64e26b1eafd47e1f9af6e3743a8a3c Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.237392 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-854f-account-create-update-ftz6n"] Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.385531 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8p6dr" event={"ID":"48e37f07-ea33-4cb7-abc1-2bd210005773","Type":"ContainerStarted","Data":"794bdf38129804151e9a2a9012d83d1994617be2b0ec18a2e1162102c74ef3a9"} Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 
07:25:41.399604 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e543180-ec99-4502-9722-5a819aad79d7","Type":"ContainerStarted","Data":"ca27a78a6de907c8a0879c92db46704509198dba7669a940743c485e4869b1ca"} Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.409425 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kqxf5" event={"ID":"718898d1-9f1d-442b-a581-b388f358f77d","Type":"ContainerStarted","Data":"feb4cfe49fdc17e77b9fccf68d8e4f5077e633bfb65cd807125b7913e4f5b568"} Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.433183 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-kqxf5" podStartSLOduration=5.433164274 podStartE2EDuration="5.433164274s" podCreationTimestamp="2025-11-29 07:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:41.432801514 +0000 UTC m=+1481.054877582" watchObservedRunningTime="2025-11-29 07:25:41.433164274 +0000 UTC m=+1481.055240332" Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.462360 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1227653a-94b6-4867-b24a-3a6e70f62d3b" path="/var/lib/kubelet/pods/1227653a-94b6-4867-b24a-3a6e70f62d3b/volumes" Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.462994 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9","Type":"ContainerStarted","Data":"e040863e686484429f779131ea3aa6c58efbef0f204d78f96a38c005e6400105"} Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.567119 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrls" 
event={"ID":"92f0fb97-210f-4cb2-82df-a802745d9cb0","Type":"ContainerStarted","Data":"22e5f38ef9e4d97ed85f9894dd3feb9dd432a2db50f4fc549f95d90b7022acbf"} Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.647110 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59fbdb74df-c54jw" event={"ID":"930ded64-8acc-4fc6-b729-034214fa160b","Type":"ContainerStarted","Data":"e8ba20fb06814103f0489f259331c413905116c37a7eab4be46c16008a46717f"} Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.685945 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f579788cb-tbwlt" event={"ID":"1cb551ca-3225-4ed7-9127-04f6a4abe792","Type":"ContainerStarted","Data":"d373a4d2aaa8c9130fc77bdf83732205ce4fdb81c437566fe161b1691423fe0d"} Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.689474 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7f579788cb-tbwlt" Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.691832 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-05c6-account-create-update-v2dls" event={"ID":"32592977-0620-41a0-9032-84d6dfeba740","Type":"ContainerStarted","Data":"d7d4874e516feecb0eb8bebd03b20a3a1cedb79d225eb57702d48e2923a5851c"} Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.700694 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mqsbn" event={"ID":"96d052ca-6f4c-4aa1-a411-da901c59e32e","Type":"ContainerStarted","Data":"a7b658c7877c83bea5d96e6203aa870f1612092a4ef299ded4a5c62fb499df98"} Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.712468 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-854f-account-create-update-ftz6n" event={"ID":"bd70a089-5326-4b8b-8090-f22b19860d0e","Type":"ContainerStarted","Data":"a6b8181bd5d1b1511e598d2936e385c906c4baf0f7aa66d537fd5543d5c9c90f"} Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.735565 4828 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1c66-account-create-update-ptpql" event={"ID":"a739da00-650f-46d6-accb-f9e0e93df7af","Type":"ContainerStarted","Data":"b167027ec7538db6df9039dd9b0eec0ebf64e26b1eafd47e1f9af6e3743a8a3c"} Nov 29 07:25:41 crc kubenswrapper[4828]: I1129 07:25:41.809360 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7f579788cb-tbwlt" podStartSLOduration=5.8093389680000005 podStartE2EDuration="5.809338968s" podCreationTimestamp="2025-11-29 07:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:41.776614737 +0000 UTC m=+1481.398690805" watchObservedRunningTime="2025-11-29 07:25:41.809338968 +0000 UTC m=+1481.431415016" Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.749551 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59fbdb74df-c54jw" event={"ID":"930ded64-8acc-4fc6-b729-034214fa160b","Type":"ContainerStarted","Data":"d96838fa80db96bb70e4979cd0c102548e525141d497c270d428592bb5f1ecc5"} Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.750345 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-59fbdb74df-c54jw" Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.755704 4828 generic.go:334] "Generic (PLEG): container finished" podID="96d052ca-6f4c-4aa1-a411-da901c59e32e" containerID="f58dc5a9733beeec6aab550f4750fd641361623783e6a529dcc62c0b17def194" exitCode=0 Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.755848 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mqsbn" event={"ID":"96d052ca-6f4c-4aa1-a411-da901c59e32e","Type":"ContainerDied","Data":"f58dc5a9733beeec6aab550f4750fd641361623783e6a529dcc62c0b17def194"} Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.759078 4828 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell0-854f-account-create-update-ftz6n" event={"ID":"bd70a089-5326-4b8b-8090-f22b19860d0e","Type":"ContainerStarted","Data":"0290d2dd34604ea94b677a5864222196c85f979ebf71c348dbd1b511e8e0f5e2"} Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.764882 4828 generic.go:334] "Generic (PLEG): container finished" podID="48e37f07-ea33-4cb7-abc1-2bd210005773" containerID="6822656aca736aee2151b4eb8e77d3ac2331aa9d8ec05f71cb91e53dfd0ca000" exitCode=0 Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.764992 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8p6dr" event={"ID":"48e37f07-ea33-4cb7-abc1-2bd210005773","Type":"ContainerDied","Data":"6822656aca736aee2151b4eb8e77d3ac2331aa9d8ec05f71cb91e53dfd0ca000"} Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.769701 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-05c6-account-create-update-v2dls" event={"ID":"32592977-0620-41a0-9032-84d6dfeba740","Type":"ContainerStarted","Data":"8cc51868bd398e20ca767b64b4c7ef917e6956bae6af6d64efd8f699f594afe2"} Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.777891 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-59fbdb74df-c54jw" podStartSLOduration=6.777862502 podStartE2EDuration="6.777862502s" podCreationTimestamp="2025-11-29 07:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:42.774119655 +0000 UTC m=+1482.396195743" watchObservedRunningTime="2025-11-29 07:25:42.777862502 +0000 UTC m=+1482.399938570" Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.793841 4828 generic.go:334] "Generic (PLEG): container finished" podID="718898d1-9f1d-442b-a581-b388f358f77d" containerID="feb4cfe49fdc17e77b9fccf68d8e4f5077e633bfb65cd807125b7913e4f5b568" exitCode=0 Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 
07:25:42.793957 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kqxf5" event={"ID":"718898d1-9f1d-442b-a581-b388f358f77d","Type":"ContainerDied","Data":"feb4cfe49fdc17e77b9fccf68d8e4f5077e633bfb65cd807125b7913e4f5b568"} Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.808451 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9","Type":"ContainerStarted","Data":"d76ba4c4ebb719ea5e0ebe9536b7f8366edcba74003334a8fc4c0449125b8b44"} Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.819122 4828 generic.go:334] "Generic (PLEG): container finished" podID="a739da00-650f-46d6-accb-f9e0e93df7af" containerID="449670b16f9313737e61efba55064cc5fac4d157a3d05d0875deccba092c45ef" exitCode=0 Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.819185 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1c66-account-create-update-ptpql" event={"ID":"a739da00-650f-46d6-accb-f9e0e93df7af","Type":"ContainerDied","Data":"449670b16f9313737e61efba55064cc5fac4d157a3d05d0875deccba092c45ef"} Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.823592 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e543180-ec99-4502-9722-5a819aad79d7","Type":"ContainerStarted","Data":"7e1230f0b4e01b5b11710706737ca7b3e7c4da808d46cc1393a1eb34d86d7b18"} Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.823755 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="ceilometer-central-agent" containerID="cri-o://6d71a8726daa13c706f787d4a521ef1e76ac9e684f253046fd92028a962ba826" gracePeriod=30 Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.824001 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:25:42 crc 
kubenswrapper[4828]: I1129 07:25:42.824046 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="proxy-httpd" containerID="cri-o://7e1230f0b4e01b5b11710706737ca7b3e7c4da808d46cc1393a1eb34d86d7b18" gracePeriod=30 Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.824091 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="sg-core" containerID="cri-o://ca27a78a6de907c8a0879c92db46704509198dba7669a940743c485e4869b1ca" gracePeriod=30 Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.824127 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="ceilometer-notification-agent" containerID="cri-o://85a70432e33cdb3f8f595aca0db5acfe7736217fae1f930d49ddfc20c7d9e74a" gracePeriod=30 Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.825952 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-05c6-account-create-update-v2dls" podStartSLOduration=6.825920596 podStartE2EDuration="6.825920596s" podCreationTimestamp="2025-11-29 07:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:42.816205287 +0000 UTC m=+1482.438281345" watchObservedRunningTime="2025-11-29 07:25:42.825920596 +0000 UTC m=+1482.447996654" Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.831047 4828 generic.go:334] "Generic (PLEG): container finished" podID="92f0fb97-210f-4cb2-82df-a802745d9cb0" containerID="22e5f38ef9e4d97ed85f9894dd3feb9dd432a2db50f4fc549f95d90b7022acbf" exitCode=0 Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.833365 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2vrls" event={"ID":"92f0fb97-210f-4cb2-82df-a802745d9cb0","Type":"ContainerDied","Data":"22e5f38ef9e4d97ed85f9894dd3feb9dd432a2db50f4fc549f95d90b7022acbf"} Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.836993 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-854f-account-create-update-ftz6n" podStartSLOduration=6.83697557 podStartE2EDuration="6.83697557s" podCreationTimestamp="2025-11-29 07:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:42.830511774 +0000 UTC m=+1482.452587832" watchObservedRunningTime="2025-11-29 07:25:42.83697557 +0000 UTC m=+1482.459051628" Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.885655 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" podUID="76228783-3735-4393-af2d-cd8ace3bd0aa" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.164:8000/healthcheck\": read tcp 10.217.0.2:60520->10.217.0.164:8000: read: connection reset by peer" Nov 29 07:25:42 crc kubenswrapper[4828]: I1129 07:25:42.953525 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9714312290000002 podStartE2EDuration="20.953505444s" podCreationTimestamp="2025-11-29 07:25:22 +0000 UTC" firstStartedPulling="2025-11-29 07:25:23.765106557 +0000 UTC m=+1463.387182615" lastFinishedPulling="2025-11-29 07:25:41.747180772 +0000 UTC m=+1481.369256830" observedRunningTime="2025-11-29 07:25:42.945132899 +0000 UTC m=+1482.567208967" watchObservedRunningTime="2025-11-29 07:25:42.953505444 +0000 UTC m=+1482.575581502" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.403255 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.511867 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2hk5\" (UniqueName: \"kubernetes.io/projected/76228783-3735-4393-af2d-cd8ace3bd0aa-kube-api-access-l2hk5\") pod \"76228783-3735-4393-af2d-cd8ace3bd0aa\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.511985 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-combined-ca-bundle\") pod \"76228783-3735-4393-af2d-cd8ace3bd0aa\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.512047 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-config-data-custom\") pod \"76228783-3735-4393-af2d-cd8ace3bd0aa\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.512088 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-config-data\") pod \"76228783-3735-4393-af2d-cd8ace3bd0aa\" (UID: \"76228783-3735-4393-af2d-cd8ace3bd0aa\") " Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.520740 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76228783-3735-4393-af2d-cd8ace3bd0aa-kube-api-access-l2hk5" (OuterVolumeSpecName: "kube-api-access-l2hk5") pod "76228783-3735-4393-af2d-cd8ace3bd0aa" (UID: "76228783-3735-4393-af2d-cd8ace3bd0aa"). InnerVolumeSpecName "kube-api-access-l2hk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.554305 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "76228783-3735-4393-af2d-cd8ace3bd0aa" (UID: "76228783-3735-4393-af2d-cd8ace3bd0aa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.564881 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76228783-3735-4393-af2d-cd8ace3bd0aa" (UID: "76228783-3735-4393-af2d-cd8ace3bd0aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.594892 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-config-data" (OuterVolumeSpecName: "config-data") pod "76228783-3735-4393-af2d-cd8ace3bd0aa" (UID: "76228783-3735-4393-af2d-cd8ace3bd0aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.614649 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.614901 4828 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.614986 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76228783-3735-4393-af2d-cd8ace3bd0aa-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.615061 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2hk5\" (UniqueName: \"kubernetes.io/projected/76228783-3735-4393-af2d-cd8ace3bd0aa-kube-api-access-l2hk5\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.845128 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9","Type":"ContainerStarted","Data":"70ddfad1fbc580b0ea4440623ca3507e9b8d86e4801b230c97b221c12a025590"} Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.847073 4828 generic.go:334] "Generic (PLEG): container finished" podID="bd70a089-5326-4b8b-8090-f22b19860d0e" containerID="0290d2dd34604ea94b677a5864222196c85f979ebf71c348dbd1b511e8e0f5e2" exitCode=0 Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.847158 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-854f-account-create-update-ftz6n" 
event={"ID":"bd70a089-5326-4b8b-8090-f22b19860d0e","Type":"ContainerDied","Data":"0290d2dd34604ea94b677a5864222196c85f979ebf71c348dbd1b511e8e0f5e2"} Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.849059 4828 generic.go:334] "Generic (PLEG): container finished" podID="76228783-3735-4393-af2d-cd8ace3bd0aa" containerID="89e752c81f3d43be2e681eb602f21fa5e468e7cc86ada9f17f3a49cf46df7fc1" exitCode=0 Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.849127 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" event={"ID":"76228783-3735-4393-af2d-cd8ace3bd0aa","Type":"ContainerDied","Data":"89e752c81f3d43be2e681eb602f21fa5e468e7cc86ada9f17f3a49cf46df7fc1"} Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.849140 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.849155 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c4d784bd9-s5pdk" event={"ID":"76228783-3735-4393-af2d-cd8ace3bd0aa","Type":"ContainerDied","Data":"1b6f5fcb2815aa759d4d03a231e07efca676a0c09aa4f3a0cc474eb8b9f83826"} Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.849172 4828 scope.go:117] "RemoveContainer" containerID="89e752c81f3d43be2e681eb602f21fa5e468e7cc86ada9f17f3a49cf46df7fc1" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.872030 4828 generic.go:334] "Generic (PLEG): container finished" podID="32592977-0620-41a0-9032-84d6dfeba740" containerID="8cc51868bd398e20ca767b64b4c7ef917e6956bae6af6d64efd8f699f594afe2" exitCode=0 Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.872193 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-05c6-account-create-update-v2dls" event={"ID":"32592977-0620-41a0-9032-84d6dfeba740","Type":"ContainerDied","Data":"8cc51868bd398e20ca767b64b4c7ef917e6956bae6af6d64efd8f699f594afe2"} Nov 29 07:25:43 crc 
kubenswrapper[4828]: I1129 07:25:43.880633 4828 generic.go:334] "Generic (PLEG): container finished" podID="6e543180-ec99-4502-9722-5a819aad79d7" containerID="7e1230f0b4e01b5b11710706737ca7b3e7c4da808d46cc1393a1eb34d86d7b18" exitCode=0 Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.880811 4828 generic.go:334] "Generic (PLEG): container finished" podID="6e543180-ec99-4502-9722-5a819aad79d7" containerID="ca27a78a6de907c8a0879c92db46704509198dba7669a940743c485e4869b1ca" exitCode=2 Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.880827 4828 generic.go:334] "Generic (PLEG): container finished" podID="6e543180-ec99-4502-9722-5a819aad79d7" containerID="85a70432e33cdb3f8f595aca0db5acfe7736217fae1f930d49ddfc20c7d9e74a" exitCode=0 Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.881175 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e543180-ec99-4502-9722-5a819aad79d7","Type":"ContainerDied","Data":"7e1230f0b4e01b5b11710706737ca7b3e7c4da808d46cc1393a1eb34d86d7b18"} Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.881217 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e543180-ec99-4502-9722-5a819aad79d7","Type":"ContainerDied","Data":"ca27a78a6de907c8a0879c92db46704509198dba7669a940743c485e4869b1ca"} Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.881233 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e543180-ec99-4502-9722-5a819aad79d7","Type":"ContainerDied","Data":"85a70432e33cdb3f8f595aca0db5acfe7736217fae1f930d49ddfc20c7d9e74a"} Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.883491 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.883456247 podStartE2EDuration="6.883456247s" podCreationTimestamp="2025-11-29 07:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:43.869385256 +0000 UTC m=+1483.491461324" watchObservedRunningTime="2025-11-29 07:25:43.883456247 +0000 UTC m=+1483.505532305" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.922037 4828 scope.go:117] "RemoveContainer" containerID="89e752c81f3d43be2e681eb602f21fa5e468e7cc86ada9f17f3a49cf46df7fc1" Nov 29 07:25:43 crc kubenswrapper[4828]: E1129 07:25:43.927957 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89e752c81f3d43be2e681eb602f21fa5e468e7cc86ada9f17f3a49cf46df7fc1\": container with ID starting with 89e752c81f3d43be2e681eb602f21fa5e468e7cc86ada9f17f3a49cf46df7fc1 not found: ID does not exist" containerID="89e752c81f3d43be2e681eb602f21fa5e468e7cc86ada9f17f3a49cf46df7fc1" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.928020 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89e752c81f3d43be2e681eb602f21fa5e468e7cc86ada9f17f3a49cf46df7fc1"} err="failed to get container status \"89e752c81f3d43be2e681eb602f21fa5e468e7cc86ada9f17f3a49cf46df7fc1\": rpc error: code = NotFound desc = could not find container \"89e752c81f3d43be2e681eb602f21fa5e468e7cc86ada9f17f3a49cf46df7fc1\": container with ID starting with 89e752c81f3d43be2e681eb602f21fa5e468e7cc86ada9f17f3a49cf46df7fc1 not found: ID does not exist" Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.950143 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7c4d784bd9-s5pdk"] Nov 29 07:25:43 crc kubenswrapper[4828]: I1129 07:25:43.975693 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7c4d784bd9-s5pdk"] Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.379426 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8p6dr" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.559899 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48e37f07-ea33-4cb7-abc1-2bd210005773-operator-scripts\") pod \"48e37f07-ea33-4cb7-abc1-2bd210005773\" (UID: \"48e37f07-ea33-4cb7-abc1-2bd210005773\") " Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.560020 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jcrl\" (UniqueName: \"kubernetes.io/projected/48e37f07-ea33-4cb7-abc1-2bd210005773-kube-api-access-5jcrl\") pod \"48e37f07-ea33-4cb7-abc1-2bd210005773\" (UID: \"48e37f07-ea33-4cb7-abc1-2bd210005773\") " Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.561943 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48e37f07-ea33-4cb7-abc1-2bd210005773-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48e37f07-ea33-4cb7-abc1-2bd210005773" (UID: "48e37f07-ea33-4cb7-abc1-2bd210005773"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.562302 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48e37f07-ea33-4cb7-abc1-2bd210005773-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.568703 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e37f07-ea33-4cb7-abc1-2bd210005773-kube-api-access-5jcrl" (OuterVolumeSpecName: "kube-api-access-5jcrl") pod "48e37f07-ea33-4cb7-abc1-2bd210005773" (UID: "48e37f07-ea33-4cb7-abc1-2bd210005773"). InnerVolumeSpecName "kube-api-access-5jcrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.646929 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1c66-account-create-update-ptpql" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.649027 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mqsbn" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.652190 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kqxf5" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.664394 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jcrl\" (UniqueName: \"kubernetes.io/projected/48e37f07-ea33-4cb7-abc1-2bd210005773-kube-api-access-5jcrl\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.765111 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wlrj\" (UniqueName: \"kubernetes.io/projected/a739da00-650f-46d6-accb-f9e0e93df7af-kube-api-access-8wlrj\") pod \"a739da00-650f-46d6-accb-f9e0e93df7af\" (UID: \"a739da00-650f-46d6-accb-f9e0e93df7af\") " Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.765517 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmtjq\" (UniqueName: \"kubernetes.io/projected/96d052ca-6f4c-4aa1-a411-da901c59e32e-kube-api-access-dmtjq\") pod \"96d052ca-6f4c-4aa1-a411-da901c59e32e\" (UID: \"96d052ca-6f4c-4aa1-a411-da901c59e32e\") " Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.765569 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/718898d1-9f1d-442b-a581-b388f358f77d-operator-scripts\") pod \"718898d1-9f1d-442b-a581-b388f358f77d\" (UID: 
\"718898d1-9f1d-442b-a581-b388f358f77d\") " Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.765607 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96d052ca-6f4c-4aa1-a411-da901c59e32e-operator-scripts\") pod \"96d052ca-6f4c-4aa1-a411-da901c59e32e\" (UID: \"96d052ca-6f4c-4aa1-a411-da901c59e32e\") " Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.765672 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a739da00-650f-46d6-accb-f9e0e93df7af-operator-scripts\") pod \"a739da00-650f-46d6-accb-f9e0e93df7af\" (UID: \"a739da00-650f-46d6-accb-f9e0e93df7af\") " Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.765730 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prxwt\" (UniqueName: \"kubernetes.io/projected/718898d1-9f1d-442b-a581-b388f358f77d-kube-api-access-prxwt\") pod \"718898d1-9f1d-442b-a581-b388f358f77d\" (UID: \"718898d1-9f1d-442b-a581-b388f358f77d\") " Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.767868 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/718898d1-9f1d-442b-a581-b388f358f77d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "718898d1-9f1d-442b-a581-b388f358f77d" (UID: "718898d1-9f1d-442b-a581-b388f358f77d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.768164 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a739da00-650f-46d6-accb-f9e0e93df7af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a739da00-650f-46d6-accb-f9e0e93df7af" (UID: "a739da00-650f-46d6-accb-f9e0e93df7af"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.768455 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96d052ca-6f4c-4aa1-a411-da901c59e32e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96d052ca-6f4c-4aa1-a411-da901c59e32e" (UID: "96d052ca-6f4c-4aa1-a411-da901c59e32e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.772683 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a739da00-650f-46d6-accb-f9e0e93df7af-kube-api-access-8wlrj" (OuterVolumeSpecName: "kube-api-access-8wlrj") pod "a739da00-650f-46d6-accb-f9e0e93df7af" (UID: "a739da00-650f-46d6-accb-f9e0e93df7af"). InnerVolumeSpecName "kube-api-access-8wlrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.772864 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/718898d1-9f1d-442b-a581-b388f358f77d-kube-api-access-prxwt" (OuterVolumeSpecName: "kube-api-access-prxwt") pod "718898d1-9f1d-442b-a581-b388f358f77d" (UID: "718898d1-9f1d-442b-a581-b388f358f77d"). InnerVolumeSpecName "kube-api-access-prxwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.774782 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d052ca-6f4c-4aa1-a411-da901c59e32e-kube-api-access-dmtjq" (OuterVolumeSpecName: "kube-api-access-dmtjq") pod "96d052ca-6f4c-4aa1-a411-da901c59e32e" (UID: "96d052ca-6f4c-4aa1-a411-da901c59e32e"). InnerVolumeSpecName "kube-api-access-dmtjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.868085 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a739da00-650f-46d6-accb-f9e0e93df7af-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.868138 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prxwt\" (UniqueName: \"kubernetes.io/projected/718898d1-9f1d-442b-a581-b388f358f77d-kube-api-access-prxwt\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.868154 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wlrj\" (UniqueName: \"kubernetes.io/projected/a739da00-650f-46d6-accb-f9e0e93df7af-kube-api-access-8wlrj\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.868166 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmtjq\" (UniqueName: \"kubernetes.io/projected/96d052ca-6f4c-4aa1-a411-da901c59e32e-kube-api-access-dmtjq\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.868179 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/718898d1-9f1d-442b-a581-b388f358f77d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.868192 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96d052ca-6f4c-4aa1-a411-da901c59e32e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.893700 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8p6dr" 
event={"ID":"48e37f07-ea33-4cb7-abc1-2bd210005773","Type":"ContainerDied","Data":"794bdf38129804151e9a2a9012d83d1994617be2b0ec18a2e1162102c74ef3a9"} Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.893751 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="794bdf38129804151e9a2a9012d83d1994617be2b0ec18a2e1162102c74ef3a9" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.893824 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8p6dr" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.905301 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1c66-account-create-update-ptpql" event={"ID":"a739da00-650f-46d6-accb-f9e0e93df7af","Type":"ContainerDied","Data":"b167027ec7538db6df9039dd9b0eec0ebf64e26b1eafd47e1f9af6e3743a8a3c"} Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.905376 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b167027ec7538db6df9039dd9b0eec0ebf64e26b1eafd47e1f9af6e3743a8a3c" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.905325 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1c66-account-create-update-ptpql" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.907690 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kqxf5" event={"ID":"718898d1-9f1d-442b-a581-b388f358f77d","Type":"ContainerDied","Data":"bafc49439528d2eef3527a5aada5149850d8258ecfc3a2a451dfae9fb1759b17"} Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.907736 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bafc49439528d2eef3527a5aada5149850d8258ecfc3a2a451dfae9fb1759b17" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.907810 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-kqxf5" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.913124 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrls" event={"ID":"92f0fb97-210f-4cb2-82df-a802745d9cb0","Type":"ContainerStarted","Data":"c3902823655c9fe7939f8f2ea1edab8ed053c0fdf6488689a5d503a1f09fa053"} Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.915184 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mqsbn" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.915178 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mqsbn" event={"ID":"96d052ca-6f4c-4aa1-a411-da901c59e32e","Type":"ContainerDied","Data":"a7b658c7877c83bea5d96e6203aa870f1612092a4ef299ded4a5c62fb499df98"} Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.915314 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7b658c7877c83bea5d96e6203aa870f1612092a4ef299ded4a5c62fb499df98" Nov 29 07:25:44 crc kubenswrapper[4828]: I1129 07:25:44.963109 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2vrls" podStartSLOduration=4.49763779 podStartE2EDuration="15.962657535s" podCreationTimestamp="2025-11-29 07:25:29 +0000 UTC" firstStartedPulling="2025-11-29 07:25:31.94037851 +0000 UTC m=+1471.562454568" lastFinishedPulling="2025-11-29 07:25:43.405398265 +0000 UTC m=+1483.027474313" observedRunningTime="2025-11-29 07:25:44.952982846 +0000 UTC m=+1484.575058904" watchObservedRunningTime="2025-11-29 07:25:44.962657535 +0000 UTC m=+1484.584733623" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.382505 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-05c6-account-create-update-v2dls" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.445874 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76228783-3735-4393-af2d-cd8ace3bd0aa" path="/var/lib/kubelet/pods/76228783-3735-4393-af2d-cd8ace3bd0aa/volumes" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.448212 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-854f-account-create-update-ftz6n" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.485189 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32592977-0620-41a0-9032-84d6dfeba740-operator-scripts\") pod \"32592977-0620-41a0-9032-84d6dfeba740\" (UID: \"32592977-0620-41a0-9032-84d6dfeba740\") " Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.485328 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2klxt\" (UniqueName: \"kubernetes.io/projected/32592977-0620-41a0-9032-84d6dfeba740-kube-api-access-2klxt\") pod \"32592977-0620-41a0-9032-84d6dfeba740\" (UID: \"32592977-0620-41a0-9032-84d6dfeba740\") " Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.486707 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32592977-0620-41a0-9032-84d6dfeba740-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32592977-0620-41a0-9032-84d6dfeba740" (UID: "32592977-0620-41a0-9032-84d6dfeba740"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.501558 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32592977-0620-41a0-9032-84d6dfeba740-kube-api-access-2klxt" (OuterVolumeSpecName: "kube-api-access-2klxt") pod "32592977-0620-41a0-9032-84d6dfeba740" (UID: "32592977-0620-41a0-9032-84d6dfeba740"). InnerVolumeSpecName "kube-api-access-2klxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.587110 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvztb\" (UniqueName: \"kubernetes.io/projected/bd70a089-5326-4b8b-8090-f22b19860d0e-kube-api-access-kvztb\") pod \"bd70a089-5326-4b8b-8090-f22b19860d0e\" (UID: \"bd70a089-5326-4b8b-8090-f22b19860d0e\") " Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.587200 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd70a089-5326-4b8b-8090-f22b19860d0e-operator-scripts\") pod \"bd70a089-5326-4b8b-8090-f22b19860d0e\" (UID: \"bd70a089-5326-4b8b-8090-f22b19860d0e\") " Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.587907 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd70a089-5326-4b8b-8090-f22b19860d0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd70a089-5326-4b8b-8090-f22b19860d0e" (UID: "bd70a089-5326-4b8b-8090-f22b19860d0e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.588505 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd70a089-5326-4b8b-8090-f22b19860d0e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.588527 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32592977-0620-41a0-9032-84d6dfeba740-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.588537 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2klxt\" (UniqueName: \"kubernetes.io/projected/32592977-0620-41a0-9032-84d6dfeba740-kube-api-access-2klxt\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.594124 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd70a089-5326-4b8b-8090-f22b19860d0e-kube-api-access-kvztb" (OuterVolumeSpecName: "kube-api-access-kvztb") pod "bd70a089-5326-4b8b-8090-f22b19860d0e" (UID: "bd70a089-5326-4b8b-8090-f22b19860d0e"). InnerVolumeSpecName "kube-api-access-kvztb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.690197 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvztb\" (UniqueName: \"kubernetes.io/projected/bd70a089-5326-4b8b-8090-f22b19860d0e-kube-api-access-kvztb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.930713 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-854f-account-create-update-ftz6n" event={"ID":"bd70a089-5326-4b8b-8090-f22b19860d0e","Type":"ContainerDied","Data":"a6b8181bd5d1b1511e598d2936e385c906c4baf0f7aa66d537fd5543d5c9c90f"} Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.930761 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b8181bd5d1b1511e598d2936e385c906c4baf0f7aa66d537fd5543d5c9c90f" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.930806 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-854f-account-create-update-ftz6n" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.932972 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-05c6-account-create-update-v2dls" event={"ID":"32592977-0620-41a0-9032-84d6dfeba740","Type":"ContainerDied","Data":"d7d4874e516feecb0eb8bebd03b20a3a1cedb79d225eb57702d48e2923a5851c"} Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.933002 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-05c6-account-create-update-v2dls" Nov 29 07:25:45 crc kubenswrapper[4828]: I1129 07:25:45.933018 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7d4874e516feecb0eb8bebd03b20a3a1cedb79d225eb57702d48e2923a5851c" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.551896 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.709674 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-combined-ca-bundle\") pod \"6e543180-ec99-4502-9722-5a819aad79d7\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.709761 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-scripts\") pod \"6e543180-ec99-4502-9722-5a819aad79d7\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.709804 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-sg-core-conf-yaml\") pod \"6e543180-ec99-4502-9722-5a819aad79d7\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.709896 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-config-data\") pod \"6e543180-ec99-4502-9722-5a819aad79d7\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.709997 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e543180-ec99-4502-9722-5a819aad79d7-log-httpd\") pod \"6e543180-ec99-4502-9722-5a819aad79d7\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.710125 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvfhl\" (UniqueName: 
\"kubernetes.io/projected/6e543180-ec99-4502-9722-5a819aad79d7-kube-api-access-jvfhl\") pod \"6e543180-ec99-4502-9722-5a819aad79d7\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.710154 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e543180-ec99-4502-9722-5a819aad79d7-run-httpd\") pod \"6e543180-ec99-4502-9722-5a819aad79d7\" (UID: \"6e543180-ec99-4502-9722-5a819aad79d7\") " Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.711134 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e543180-ec99-4502-9722-5a819aad79d7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6e543180-ec99-4502-9722-5a819aad79d7" (UID: "6e543180-ec99-4502-9722-5a819aad79d7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.711386 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e543180-ec99-4502-9722-5a819aad79d7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6e543180-ec99-4502-9722-5a819aad79d7" (UID: "6e543180-ec99-4502-9722-5a819aad79d7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.711698 4828 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e543180-ec99-4502-9722-5a819aad79d7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.711720 4828 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e543180-ec99-4502-9722-5a819aad79d7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.726218 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-scripts" (OuterVolumeSpecName: "scripts") pod "6e543180-ec99-4502-9722-5a819aad79d7" (UID: "6e543180-ec99-4502-9722-5a819aad79d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.726410 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e543180-ec99-4502-9722-5a819aad79d7-kube-api-access-jvfhl" (OuterVolumeSpecName: "kube-api-access-jvfhl") pod "6e543180-ec99-4502-9722-5a819aad79d7" (UID: "6e543180-ec99-4502-9722-5a819aad79d7"). InnerVolumeSpecName "kube-api-access-jvfhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.744161 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6e543180-ec99-4502-9722-5a819aad79d7" (UID: "6e543180-ec99-4502-9722-5a819aad79d7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.814008 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.814257 4828 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.814388 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvfhl\" (UniqueName: \"kubernetes.io/projected/6e543180-ec99-4502-9722-5a819aad79d7-kube-api-access-jvfhl\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.846605 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e543180-ec99-4502-9722-5a819aad79d7" (UID: "6e543180-ec99-4502-9722-5a819aad79d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.865322 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-config-data" (OuterVolumeSpecName: "config-data") pod "6e543180-ec99-4502-9722-5a819aad79d7" (UID: "6e543180-ec99-4502-9722-5a819aad79d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.916301 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.916338 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e543180-ec99-4502-9722-5a819aad79d7-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.946465 4828 generic.go:334] "Generic (PLEG): container finished" podID="6e543180-ec99-4502-9722-5a819aad79d7" containerID="6d71a8726daa13c706f787d4a521ef1e76ac9e684f253046fd92028a962ba826" exitCode=0 Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.946512 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e543180-ec99-4502-9722-5a819aad79d7","Type":"ContainerDied","Data":"6d71a8726daa13c706f787d4a521ef1e76ac9e684f253046fd92028a962ba826"} Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.946540 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e543180-ec99-4502-9722-5a819aad79d7","Type":"ContainerDied","Data":"09d85362eee5408ae14fa186d49418d792192b1fa33b4e9c35ba3935217edc8a"} Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.946558 4828 scope.go:117] "RemoveContainer" containerID="7e1230f0b4e01b5b11710706737ca7b3e7c4da808d46cc1393a1eb34d86d7b18" Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.946713 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:25:46 crc kubenswrapper[4828]: I1129 07:25:46.991191 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.001817 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.006618 4828 scope.go:117] "RemoveContainer" containerID="ca27a78a6de907c8a0879c92db46704509198dba7669a940743c485e4869b1ca"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.029726 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.030129 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="sg-core"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030146 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="sg-core"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.030161 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="718898d1-9f1d-442b-a581-b388f358f77d" containerName="mariadb-database-create"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030169 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="718898d1-9f1d-442b-a581-b388f358f77d" containerName="mariadb-database-create"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.030176 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e37f07-ea33-4cb7-abc1-2bd210005773" containerName="mariadb-database-create"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030182 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e37f07-ea33-4cb7-abc1-2bd210005773" containerName="mariadb-database-create"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.030194 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="ceilometer-notification-agent"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030200 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="ceilometer-notification-agent"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.030209 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a739da00-650f-46d6-accb-f9e0e93df7af" containerName="mariadb-account-create-update"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030215 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a739da00-650f-46d6-accb-f9e0e93df7af" containerName="mariadb-account-create-update"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.030228 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d052ca-6f4c-4aa1-a411-da901c59e32e" containerName="mariadb-database-create"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030234 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d052ca-6f4c-4aa1-a411-da901c59e32e" containerName="mariadb-database-create"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.030245 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7fa3104-0c77-4894-98bd-ecc7ab46c914" containerName="init"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030251 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7fa3104-0c77-4894-98bd-ecc7ab46c914" containerName="init"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.030261 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="proxy-httpd"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030280 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="proxy-httpd"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.030291 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7fa3104-0c77-4894-98bd-ecc7ab46c914" containerName="dnsmasq-dns"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030297 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7fa3104-0c77-4894-98bd-ecc7ab46c914" containerName="dnsmasq-dns"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.030316 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1227653a-94b6-4867-b24a-3a6e70f62d3b" containerName="heat-api"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030322 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1227653a-94b6-4867-b24a-3a6e70f62d3b" containerName="heat-api"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.030329 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32592977-0620-41a0-9032-84d6dfeba740" containerName="mariadb-account-create-update"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030335 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="32592977-0620-41a0-9032-84d6dfeba740" containerName="mariadb-account-create-update"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.030348 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76228783-3735-4393-af2d-cd8ace3bd0aa" containerName="heat-cfnapi"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030354 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="76228783-3735-4393-af2d-cd8ace3bd0aa" containerName="heat-cfnapi"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.030369 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd70a089-5326-4b8b-8090-f22b19860d0e" containerName="mariadb-account-create-update"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030374 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd70a089-5326-4b8b-8090-f22b19860d0e" containerName="mariadb-account-create-update"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.030383 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="ceilometer-central-agent"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030389 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="ceilometer-central-agent"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030560 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7fa3104-0c77-4894-98bd-ecc7ab46c914" containerName="dnsmasq-dns"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030572 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="76228783-3735-4393-af2d-cd8ace3bd0aa" containerName="heat-cfnapi"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030581 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="sg-core"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030590 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="96d052ca-6f4c-4aa1-a411-da901c59e32e" containerName="mariadb-database-create"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030598 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="ceilometer-central-agent"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030609 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1227653a-94b6-4867-b24a-3a6e70f62d3b" containerName="heat-api"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030619 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="proxy-httpd"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030627 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="48e37f07-ea33-4cb7-abc1-2bd210005773" containerName="mariadb-database-create"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030636 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd70a089-5326-4b8b-8090-f22b19860d0e" containerName="mariadb-account-create-update"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030646 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a739da00-650f-46d6-accb-f9e0e93df7af" containerName="mariadb-account-create-update"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030657 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="718898d1-9f1d-442b-a581-b388f358f77d" containerName="mariadb-database-create"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030665 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="32592977-0620-41a0-9032-84d6dfeba740" containerName="mariadb-account-create-update"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.030673 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e543180-ec99-4502-9722-5a819aad79d7" containerName="ceilometer-notification-agent"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.036237 4828 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.042737 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.043162 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.079882 4828 scope.go:117] "RemoveContainer" containerID="85a70432e33cdb3f8f595aca0db5acfe7736217fae1f930d49ddfc20c7d9e74a"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.080371 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.116850 4828 scope.go:117] "RemoveContainer" containerID="6d71a8726daa13c706f787d4a521ef1e76ac9e684f253046fd92028a962ba826"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.120525 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.120593 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9skq\" (UniqueName: \"kubernetes.io/projected/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-kube-api-access-t9skq\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.120636 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-run-httpd\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.120710 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-scripts\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.120738 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-config-data\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.120764 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-log-httpd\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.120789 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.222621 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.222686 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9skq\" (UniqueName: \"kubernetes.io/projected/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-kube-api-access-t9skq\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.222725 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-run-httpd\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.222791 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-scripts\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.222815 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-config-data\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.222833 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-log-httpd\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.222854 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc
kubenswrapper[4828]: I1129 07:25:47.223945 4828 scope.go:117] "RemoveContainer" containerID="7e1230f0b4e01b5b11710706737ca7b3e7c4da808d46cc1393a1eb34d86d7b18"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.224471 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-log-httpd\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.224836 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e1230f0b4e01b5b11710706737ca7b3e7c4da808d46cc1393a1eb34d86d7b18\": container with ID starting with 7e1230f0b4e01b5b11710706737ca7b3e7c4da808d46cc1393a1eb34d86d7b18 not found: ID does not exist" containerID="7e1230f0b4e01b5b11710706737ca7b3e7c4da808d46cc1393a1eb34d86d7b18"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.224901 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e1230f0b4e01b5b11710706737ca7b3e7c4da808d46cc1393a1eb34d86d7b18"} err="failed to get container status \"7e1230f0b4e01b5b11710706737ca7b3e7c4da808d46cc1393a1eb34d86d7b18\": rpc error: code = NotFound desc = could not find container \"7e1230f0b4e01b5b11710706737ca7b3e7c4da808d46cc1393a1eb34d86d7b18\": container with ID starting with 7e1230f0b4e01b5b11710706737ca7b3e7c4da808d46cc1393a1eb34d86d7b18 not found: ID does not exist"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.224936 4828 scope.go:117] "RemoveContainer" containerID="ca27a78a6de907c8a0879c92db46704509198dba7669a940743c485e4869b1ca"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.225058 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-run-httpd\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.228210 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca27a78a6de907c8a0879c92db46704509198dba7669a940743c485e4869b1ca\": container with ID starting with ca27a78a6de907c8a0879c92db46704509198dba7669a940743c485e4869b1ca not found: ID does not exist" containerID="ca27a78a6de907c8a0879c92db46704509198dba7669a940743c485e4869b1ca"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.229525 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.229752 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-scripts\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.228636 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca27a78a6de907c8a0879c92db46704509198dba7669a940743c485e4869b1ca"} err="failed to get container status \"ca27a78a6de907c8a0879c92db46704509198dba7669a940743c485e4869b1ca\": rpc error: code = NotFound desc = could not find container \"ca27a78a6de907c8a0879c92db46704509198dba7669a940743c485e4869b1ca\": container with ID starting with ca27a78a6de907c8a0879c92db46704509198dba7669a940743c485e4869b1ca not found: ID does not exist"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.235984 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.236520 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-config-data\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.244204 4828 scope.go:117] "RemoveContainer" containerID="85a70432e33cdb3f8f595aca0db5acfe7736217fae1f930d49ddfc20c7d9e74a"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.245093 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85a70432e33cdb3f8f595aca0db5acfe7736217fae1f930d49ddfc20c7d9e74a\": container with ID starting with 85a70432e33cdb3f8f595aca0db5acfe7736217fae1f930d49ddfc20c7d9e74a not found: ID does not exist" containerID="85a70432e33cdb3f8f595aca0db5acfe7736217fae1f930d49ddfc20c7d9e74a"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.245149 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85a70432e33cdb3f8f595aca0db5acfe7736217fae1f930d49ddfc20c7d9e74a"} err="failed to get container status \"85a70432e33cdb3f8f595aca0db5acfe7736217fae1f930d49ddfc20c7d9e74a\": rpc error: code = NotFound desc = could not find container \"85a70432e33cdb3f8f595aca0db5acfe7736217fae1f930d49ddfc20c7d9e74a\": container with ID starting with 85a70432e33cdb3f8f595aca0db5acfe7736217fae1f930d49ddfc20c7d9e74a not found: ID does not exist"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.245188 4828 scope.go:117] "RemoveContainer" containerID="6d71a8726daa13c706f787d4a521ef1e76ac9e684f253046fd92028a962ba826"
Nov 29 07:25:47 crc kubenswrapper[4828]: E1129 07:25:47.245994 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d71a8726daa13c706f787d4a521ef1e76ac9e684f253046fd92028a962ba826\": container with ID starting with 6d71a8726daa13c706f787d4a521ef1e76ac9e684f253046fd92028a962ba826 not found: ID does not exist" containerID="6d71a8726daa13c706f787d4a521ef1e76ac9e684f253046fd92028a962ba826"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.246026 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d71a8726daa13c706f787d4a521ef1e76ac9e684f253046fd92028a962ba826"} err="failed to get container status \"6d71a8726daa13c706f787d4a521ef1e76ac9e684f253046fd92028a962ba826\": rpc error: code = NotFound desc = could not find container \"6d71a8726daa13c706f787d4a521ef1e76ac9e684f253046fd92028a962ba826\": container with ID starting with 6d71a8726daa13c706f787d4a521ef1e76ac9e684f253046fd92028a962ba826 not found: ID does not exist"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.247989 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9skq\" (UniqueName: \"kubernetes.io/projected/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-kube-api-access-t9skq\") pod \"ceilometer-0\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.347477 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wdknn"]
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.351233 4828 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wdknn"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.353815 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.353974 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.354295 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wfdkq"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.400582 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.407897 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wdknn"]
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.430230 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e543180-ec99-4502-9722-5a819aad79d7" path="/var/lib/kubelet/pods/6e543180-ec99-4502-9722-5a819aad79d7/volumes"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.532660 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wdknn\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " pod="openstack/nova-cell0-conductor-db-sync-wdknn"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.533169 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-scripts\") pod \"nova-cell0-conductor-db-sync-wdknn\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " pod="openstack/nova-cell0-conductor-db-sync-wdknn"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.533354 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4jjh\" (UniqueName: \"kubernetes.io/projected/33043721-20af-4165-8035-2a4fbe295eb3-kube-api-access-g4jjh\") pod \"nova-cell0-conductor-db-sync-wdknn\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " pod="openstack/nova-cell0-conductor-db-sync-wdknn"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.533483 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-config-data\") pod \"nova-cell0-conductor-db-sync-wdknn\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " pod="openstack/nova-cell0-conductor-db-sync-wdknn"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.640411 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wdknn\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " pod="openstack/nova-cell0-conductor-db-sync-wdknn"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.640504 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-scripts\") pod \"nova-cell0-conductor-db-sync-wdknn\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " pod="openstack/nova-cell0-conductor-db-sync-wdknn"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.640562 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4jjh\" (UniqueName: \"kubernetes.io/projected/33043721-20af-4165-8035-2a4fbe295eb3-kube-api-access-g4jjh\") pod \"nova-cell0-conductor-db-sync-wdknn\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " pod="openstack/nova-cell0-conductor-db-sync-wdknn"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.640603 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-config-data\") pod \"nova-cell0-conductor-db-sync-wdknn\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " pod="openstack/nova-cell0-conductor-db-sync-wdknn"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.645713 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-scripts\") pod \"nova-cell0-conductor-db-sync-wdknn\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " pod="openstack/nova-cell0-conductor-db-sync-wdknn"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.647968 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.653804 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-config-data\") pod \"nova-cell0-conductor-db-sync-wdknn\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " pod="openstack/nova-cell0-conductor-db-sync-wdknn"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.653835 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wdknn\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " pod="openstack/nova-cell0-conductor-db-sync-wdknn"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.663496 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4jjh\" (UniqueName: \"kubernetes.io/projected/33043721-20af-4165-8035-2a4fbe295eb3-kube-api-access-g4jjh\") pod \"nova-cell0-conductor-db-sync-wdknn\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " pod="openstack/nova-cell0-conductor-db-sync-wdknn"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.672696 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wdknn"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.958698 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-fd957fd8c-nfdrx"
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.970825 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.999213 4828 generic.go:334] "Generic (PLEG): container finished" podID="70dc014d-201b-448d-84ba-2c89e7c10855" containerID="e1e506485c1ea7a4452f3107adefc8e0fc18d9f429760a73eeea4e4d544c8455" exitCode=0
Nov 29 07:25:47 crc kubenswrapper[4828]: I1129 07:25:47.999360 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mhgs8" event={"ID":"70dc014d-201b-448d-84ba-2c89e7c10855","Type":"ContainerDied","Data":"e1e506485c1ea7a4452f3107adefc8e0fc18d9f429760a73eeea4e4d544c8455"}
Nov 29 07:25:48 crc kubenswrapper[4828]: I1129 07:25:48.053616 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wdknn"]
Nov 29 07:25:48 crc kubenswrapper[4828]: I1129 07:25:48.113722 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Nov 29 07:25:48 crc kubenswrapper[4828]: I1129 07:25:48.137938 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-999b4d64b-9brmm"]
Nov 29 07:25:48 crc kubenswrapper[4828]: I1129 07:25:48.138202 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-999b4d64b-9brmm"
podUID="28c4e1bf-c5c6-44af-97c5-e035b3e9aafc" containerName="heat-engine" containerID="cri-o://95cf632a677528bf161566975719001239ffeb10f9f291093a7f2c3cc8074508" gracePeriod=60
Nov 29 07:25:48 crc kubenswrapper[4828]: E1129 07:25:48.869586 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="95cf632a677528bf161566975719001239ffeb10f9f291093a7f2c3cc8074508" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 29 07:25:48 crc kubenswrapper[4828]: E1129 07:25:48.882858 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="95cf632a677528bf161566975719001239ffeb10f9f291093a7f2c3cc8074508" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 29 07:25:48 crc kubenswrapper[4828]: E1129 07:25:48.888020 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="95cf632a677528bf161566975719001239ffeb10f9f291093a7f2c3cc8074508" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 29 07:25:48 crc kubenswrapper[4828]: E1129 07:25:48.888118 4828 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-999b4d64b-9brmm" podUID="28c4e1bf-c5c6-44af-97c5-e035b3e9aafc" containerName="heat-engine"
Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.021517 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ea3f184-e2a4-42b5-8215-3317a6b0a50e","Type":"ContainerStarted","Data":"014561d0493301fe6f4ac3f4764457718bd8e8981841a2a22aac8d312a3cc43f"}
Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.023147 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wdknn" event={"ID":"33043721-20af-4165-8035-2a4fbe295eb3","Type":"ContainerStarted","Data":"4e3473174c6b144290a3bcc83ff71d938460898b78dc8d303b8edf657db87e81"}
Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.300420 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-77df56fcb4-fs2h4" podUID="1227653a-94b6-4867-b24a-3a6e70f62d3b" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.165:8004/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.435591 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-59fbdb74df-c54jw"
Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.563328 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-654c45c88d-sbsls"]
Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.608570 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mhgs8"
Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.620238 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7f579788cb-tbwlt"
Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.649096 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2vrls"
Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.651635 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2vrls"
Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.689843 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70dc014d-201b-448d-84ba-2c89e7c10855-combined-ca-bundle\") pod \"70dc014d-201b-448d-84ba-2c89e7c10855\" (UID: \"70dc014d-201b-448d-84ba-2c89e7c10855\") "
Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.690194 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlsgz\" (UniqueName: \"kubernetes.io/projected/70dc014d-201b-448d-84ba-2c89e7c10855-kube-api-access-qlsgz\") pod \"70dc014d-201b-448d-84ba-2c89e7c10855\" (UID: \"70dc014d-201b-448d-84ba-2c89e7c10855\") "
Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.690322 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/70dc014d-201b-448d-84ba-2c89e7c10855-config\") pod \"70dc014d-201b-448d-84ba-2c89e7c10855\" (UID: \"70dc014d-201b-448d-84ba-2c89e7c10855\") "
Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.699092 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70dc014d-201b-448d-84ba-2c89e7c10855-kube-api-access-qlsgz" (OuterVolumeSpecName: "kube-api-access-qlsgz") pod "70dc014d-201b-448d-84ba-2c89e7c10855"
(UID: "70dc014d-201b-448d-84ba-2c89e7c10855"). InnerVolumeSpecName "kube-api-access-qlsgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.719334 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57744bffdb-m2ffz"] Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.782039 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70dc014d-201b-448d-84ba-2c89e7c10855-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70dc014d-201b-448d-84ba-2c89e7c10855" (UID: "70dc014d-201b-448d-84ba-2c89e7c10855"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.793572 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70dc014d-201b-448d-84ba-2c89e7c10855-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.793603 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlsgz\" (UniqueName: \"kubernetes.io/projected/70dc014d-201b-448d-84ba-2c89e7c10855-kube-api-access-qlsgz\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.798295 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70dc014d-201b-448d-84ba-2c89e7c10855-config" (OuterVolumeSpecName: "config") pod "70dc014d-201b-448d-84ba-2c89e7c10855" (UID: "70dc014d-201b-448d-84ba-2c89e7c10855"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:49 crc kubenswrapper[4828]: I1129 07:25:49.895835 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/70dc014d-201b-448d-84ba-2c89e7c10855-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.029467 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-654c45c88d-sbsls" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.057050 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mhgs8" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.059357 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mhgs8" event={"ID":"70dc014d-201b-448d-84ba-2c89e7c10855","Type":"ContainerDied","Data":"ea853d42e1d81e749d9473f07008c501b3441a71889a8416ca79b3a29b5ac4e9"} Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.059398 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea853d42e1d81e749d9473f07008c501b3441a71889a8416ca79b3a29b5ac4e9" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.072182 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-654c45c88d-sbsls" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.072966 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-654c45c88d-sbsls" event={"ID":"992b3577-23a8-4d07-8826-821fce571ebd","Type":"ContainerDied","Data":"565ae8646a20ead5bd7654650ef3e0deb9d0c379e6d38eb3ed8fa66336bb5dd3"} Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.073053 4828 scope.go:117] "RemoveContainer" containerID="8ad2111ea3b27ff55663d697edaa9933e2778cbdb6ff0bfdc1c27c25dadb64e9" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.098358 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-combined-ca-bundle\") pod \"992b3577-23a8-4d07-8826-821fce571ebd\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.098468 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-config-data-custom\") pod \"992b3577-23a8-4d07-8826-821fce571ebd\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.098557 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xvtx\" (UniqueName: \"kubernetes.io/projected/992b3577-23a8-4d07-8826-821fce571ebd-kube-api-access-2xvtx\") pod \"992b3577-23a8-4d07-8826-821fce571ebd\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.098602 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-config-data\") pod \"992b3577-23a8-4d07-8826-821fce571ebd\" (UID: \"992b3577-23a8-4d07-8826-821fce571ebd\") " Nov 29 
07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.138437 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "992b3577-23a8-4d07-8826-821fce571ebd" (UID: "992b3577-23a8-4d07-8826-821fce571ebd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.204995 4828 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.267824 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-klssl"] Nov 29 07:25:50 crc kubenswrapper[4828]: E1129 07:25:50.268357 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992b3577-23a8-4d07-8826-821fce571ebd" containerName="heat-api" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.268382 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="992b3577-23a8-4d07-8826-821fce571ebd" containerName="heat-api" Nov 29 07:25:50 crc kubenswrapper[4828]: E1129 07:25:50.268414 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70dc014d-201b-448d-84ba-2c89e7c10855" containerName="neutron-db-sync" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.268421 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="70dc014d-201b-448d-84ba-2c89e7c10855" containerName="neutron-db-sync" Nov 29 07:25:50 crc kubenswrapper[4828]: E1129 07:25:50.268434 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992b3577-23a8-4d07-8826-821fce571ebd" containerName="heat-api" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.268441 4828 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="992b3577-23a8-4d07-8826-821fce571ebd" containerName="heat-api" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.268739 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="992b3577-23a8-4d07-8826-821fce571ebd" containerName="heat-api" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.268755 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="70dc014d-201b-448d-84ba-2c89e7c10855" containerName="neutron-db-sync" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.269782 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="992b3577-23a8-4d07-8826-821fce571ebd" containerName="heat-api" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.270647 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.273613 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57744bffdb-m2ffz" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.277223 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-klssl"] Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.302892 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/992b3577-23a8-4d07-8826-821fce571ebd-kube-api-access-2xvtx" (OuterVolumeSpecName: "kube-api-access-2xvtx") pod "992b3577-23a8-4d07-8826-821fce571ebd" (UID: "992b3577-23a8-4d07-8826-821fce571ebd"). InnerVolumeSpecName "kube-api-access-2xvtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.306813 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xvtx\" (UniqueName: \"kubernetes.io/projected/992b3577-23a8-4d07-8826-821fce571ebd-kube-api-access-2xvtx\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.370922 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "992b3577-23a8-4d07-8826-821fce571ebd" (UID: "992b3577-23a8-4d07-8826-821fce571ebd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.391525 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-config-data" (OuterVolumeSpecName: "config-data") pod "992b3577-23a8-4d07-8826-821fce571ebd" (UID: "992b3577-23a8-4d07-8826-821fce571ebd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.393593 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-578795589b-kkwlj"] Nov 29 07:25:50 crc kubenswrapper[4828]: E1129 07:25:50.394069 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0fed58-e5bc-453b-9918-5d1a44dcf00d" containerName="heat-cfnapi" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.394089 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0fed58-e5bc-453b-9918-5d1a44dcf00d" containerName="heat-cfnapi" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.394292 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0fed58-e5bc-453b-9918-5d1a44dcf00d" containerName="heat-cfnapi" Nov 29 07:25:50 crc kubenswrapper[4828]: E1129 07:25:50.394516 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0fed58-e5bc-453b-9918-5d1a44dcf00d" containerName="heat-cfnapi" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.394535 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0fed58-e5bc-453b-9918-5d1a44dcf00d" containerName="heat-cfnapi" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.394737 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0fed58-e5bc-453b-9918-5d1a44dcf00d" containerName="heat-cfnapi" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.395494 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.406199 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.406207 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2hc7w" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.406591 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.406677 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.410156 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-config-data\") pod \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.410592 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9vbx\" (UniqueName: \"kubernetes.io/projected/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-kube-api-access-p9vbx\") pod \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.410620 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-config-data-custom\") pod \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.410672 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-combined-ca-bundle\") pod \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\" (UID: \"0b0fed58-e5bc-453b-9918-5d1a44dcf00d\") " Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.410906 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.410978 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.411006 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.411071 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47cst\" (UniqueName: \"kubernetes.io/projected/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-kube-api-access-47cst\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.411128 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-config\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.411189 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.411246 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.411259 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992b3577-23a8-4d07-8826-821fce571ebd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.421672 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0b0fed58-e5bc-453b-9918-5d1a44dcf00d" (UID: "0b0fed58-e5bc-453b-9918-5d1a44dcf00d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.421886 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-kube-api-access-p9vbx" (OuterVolumeSpecName: "kube-api-access-p9vbx") pod "0b0fed58-e5bc-453b-9918-5d1a44dcf00d" (UID: "0b0fed58-e5bc-453b-9918-5d1a44dcf00d"). InnerVolumeSpecName "kube-api-access-p9vbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.434072 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-578795589b-kkwlj"] Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.470158 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b0fed58-e5bc-453b-9918-5d1a44dcf00d" (UID: "0b0fed58-e5bc-453b-9918-5d1a44dcf00d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.484384 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-config-data" (OuterVolumeSpecName: "config-data") pod "0b0fed58-e5bc-453b-9918-5d1a44dcf00d" (UID: "0b0fed58-e5bc-453b-9918-5d1a44dcf00d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.513865 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.513925 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.513986 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47cst\" (UniqueName: \"kubernetes.io/projected/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-kube-api-access-47cst\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.514021 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-combined-ca-bundle\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.514057 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-config\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" 
Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.514081 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-ovndb-tls-certs\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.514098 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzn9\" (UniqueName: \"kubernetes.io/projected/4d02fcd3-69b7-410c-8027-e36cbd5ae830-kube-api-access-jmzn9\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.514115 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-httpd-config\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.514145 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-config\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.514167 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc 
kubenswrapper[4828]: I1129 07:25:50.514205 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.514396 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.514411 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9vbx\" (UniqueName: \"kubernetes.io/projected/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-kube-api-access-p9vbx\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.514421 4828 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.514430 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0fed58-e5bc-453b-9918-5d1a44dcf00d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.517059 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-config\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.517295 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.517692 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.518173 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.539454 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47cst\" (UniqueName: \"kubernetes.io/projected/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-kube-api-access-47cst\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.554783 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-klssl\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.591168 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.615950 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-config\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.616163 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-combined-ca-bundle\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.616217 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-ovndb-tls-certs\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.616239 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmzn9\" (UniqueName: \"kubernetes.io/projected/4d02fcd3-69b7-410c-8027-e36cbd5ae830-kube-api-access-jmzn9\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.616290 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-httpd-config\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc 
kubenswrapper[4828]: I1129 07:25:50.622618 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-httpd-config\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.623519 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-ovndb-tls-certs\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.627174 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-combined-ca-bundle\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.654456 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmzn9\" (UniqueName: \"kubernetes.io/projected/4d02fcd3-69b7-410c-8027-e36cbd5ae830-kube-api-access-jmzn9\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.657542 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-config\") pod \"neutron-578795589b-kkwlj\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.740225 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.745548 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2vrls" podUID="92f0fb97-210f-4cb2-82df-a802745d9cb0" containerName="registry-server" probeResult="failure" output=< Nov 29 07:25:50 crc kubenswrapper[4828]: timeout: failed to connect service ":50051" within 1s Nov 29 07:25:50 crc kubenswrapper[4828]: > Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.802025 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-654c45c88d-sbsls"] Nov 29 07:25:50 crc kubenswrapper[4828]: I1129 07:25:50.835172 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-654c45c88d-sbsls"] Nov 29 07:25:51 crc kubenswrapper[4828]: I1129 07:25:51.133581 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ea3f184-e2a4-42b5-8215-3317a6b0a50e","Type":"ContainerStarted","Data":"90a72fcd49d483e28014d981981a6d0f2d26d67b6f5b5957e4b234f8f6a88506"} Nov 29 07:25:51 crc kubenswrapper[4828]: I1129 07:25:51.138109 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57744bffdb-m2ffz" event={"ID":"0b0fed58-e5bc-453b-9918-5d1a44dcf00d","Type":"ContainerDied","Data":"ccef3b9f9be59e65535a54b0046334111e6f68e0bae521ac8724ba0dba28274f"} Nov 29 07:25:51 crc kubenswrapper[4828]: I1129 07:25:51.138154 4828 scope.go:117] "RemoveContainer" containerID="fdeffc2c23a7074a057ed0f257c041712074e4be257b62ef46fddd3f26de560b" Nov 29 07:25:51 crc kubenswrapper[4828]: I1129 07:25:51.138280 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-57744bffdb-m2ffz" Nov 29 07:25:51 crc kubenswrapper[4828]: I1129 07:25:51.189277 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57744bffdb-m2ffz"] Nov 29 07:25:51 crc kubenswrapper[4828]: I1129 07:25:51.199497 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-57744bffdb-m2ffz"] Nov 29 07:25:51 crc kubenswrapper[4828]: I1129 07:25:51.229013 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-klssl"] Nov 29 07:25:51 crc kubenswrapper[4828]: I1129 07:25:51.425827 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b0fed58-e5bc-453b-9918-5d1a44dcf00d" path="/var/lib/kubelet/pods/0b0fed58-e5bc-453b-9918-5d1a44dcf00d/volumes" Nov 29 07:25:51 crc kubenswrapper[4828]: I1129 07:25:51.426764 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="992b3577-23a8-4d07-8826-821fce571ebd" path="/var/lib/kubelet/pods/992b3577-23a8-4d07-8826-821fce571ebd/volumes" Nov 29 07:25:52 crc kubenswrapper[4828]: I1129 07:25:52.196597 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" event={"ID":"40fa68bc-11d6-4b01-b6ec-b3839e003d8c","Type":"ContainerStarted","Data":"be4c0499542808795b84d8632b524298c7c6cd7fdda0a12c584cba3218416fc5"} Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.220057 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" event={"ID":"40fa68bc-11d6-4b01-b6ec-b3839e003d8c","Type":"ContainerStarted","Data":"cede101ca1b90f11b3bdc12e9982c06dd91a9d316ba82787ff23f08ee0b5eecc"} Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.400465 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-578795589b-kkwlj"] Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.601139 4828 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-84b768d757-5f2b9"] Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.603781 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.606823 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.606979 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.620681 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84b768d757-5f2b9"] Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.693392 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47pdq\" (UniqueName: \"kubernetes.io/projected/3334d09a-df8a-448e-90a3-79f36ee70a07-kube-api-access-47pdq\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.693472 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-public-tls-certs\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.693530 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-ovndb-tls-certs\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.693558 
4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-httpd-config\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.693603 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-internal-tls-certs\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.693663 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-config\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.693715 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-combined-ca-bundle\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.795007 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-internal-tls-certs\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.795077 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-config\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.795125 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-combined-ca-bundle\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.795178 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47pdq\" (UniqueName: \"kubernetes.io/projected/3334d09a-df8a-448e-90a3-79f36ee70a07-kube-api-access-47pdq\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.795210 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-public-tls-certs\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.795243 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-ovndb-tls-certs\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.795261 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-httpd-config\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.803944 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-internal-tls-certs\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.820471 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-ovndb-tls-certs\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.821121 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-combined-ca-bundle\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.821688 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-httpd-config\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.823803 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-config\") pod \"neutron-84b768d757-5f2b9\" (UID: 
\"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.824127 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47pdq\" (UniqueName: \"kubernetes.io/projected/3334d09a-df8a-448e-90a3-79f36ee70a07-kube-api-access-47pdq\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:53 crc kubenswrapper[4828]: I1129 07:25:53.826254 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3334d09a-df8a-448e-90a3-79f36ee70a07-public-tls-certs\") pod \"neutron-84b768d757-5f2b9\" (UID: \"3334d09a-df8a-448e-90a3-79f36ee70a07\") " pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:54 crc kubenswrapper[4828]: I1129 07:25:54.052871 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:54 crc kubenswrapper[4828]: I1129 07:25:54.240847 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ea3f184-e2a4-42b5-8215-3317a6b0a50e","Type":"ContainerStarted","Data":"b884a3b25b7e05c18834638576c07c664d8f0cf7eba93a15a19c1d340f8fbe87"} Nov 29 07:25:54 crc kubenswrapper[4828]: I1129 07:25:54.243741 4828 generic.go:334] "Generic (PLEG): container finished" podID="40fa68bc-11d6-4b01-b6ec-b3839e003d8c" containerID="cede101ca1b90f11b3bdc12e9982c06dd91a9d316ba82787ff23f08ee0b5eecc" exitCode=0 Nov 29 07:25:54 crc kubenswrapper[4828]: I1129 07:25:54.243805 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" event={"ID":"40fa68bc-11d6-4b01-b6ec-b3839e003d8c","Type":"ContainerDied","Data":"cede101ca1b90f11b3bdc12e9982c06dd91a9d316ba82787ff23f08ee0b5eecc"} Nov 29 07:25:54 crc kubenswrapper[4828]: I1129 07:25:54.243828 4828 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" event={"ID":"40fa68bc-11d6-4b01-b6ec-b3839e003d8c","Type":"ContainerStarted","Data":"ff8dff7a3a3039430c2fbe5affc49f684bb3373fc89ad9a5f0a610e68f26b498"} Nov 29 07:25:54 crc kubenswrapper[4828]: I1129 07:25:54.245025 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:25:54 crc kubenswrapper[4828]: I1129 07:25:54.259154 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-578795589b-kkwlj" event={"ID":"4d02fcd3-69b7-410c-8027-e36cbd5ae830","Type":"ContainerStarted","Data":"1d1ab7c820a643ed12e14c32cd13c8701a07bd69cc44c2c1bcae8f4fea4343f0"} Nov 29 07:25:54 crc kubenswrapper[4828]: I1129 07:25:54.259496 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-578795589b-kkwlj" event={"ID":"4d02fcd3-69b7-410c-8027-e36cbd5ae830","Type":"ContainerStarted","Data":"48b1fe4b4404d06a0483ced4af3d0579a95c6349b8150262a655921a3cd362b2"} Nov 29 07:25:54 crc kubenswrapper[4828]: I1129 07:25:54.259508 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-578795589b-kkwlj" event={"ID":"4d02fcd3-69b7-410c-8027-e36cbd5ae830","Type":"ContainerStarted","Data":"33b182b3b847a05a3ac52d55744e7a769412d04bbf3c5b0b4efdd0311777718e"} Nov 29 07:25:54 crc kubenswrapper[4828]: I1129 07:25:54.259691 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:25:54 crc kubenswrapper[4828]: I1129 07:25:54.292139 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" podStartSLOduration=4.292113412 podStartE2EDuration="4.292113412s" podCreationTimestamp="2025-11-29 07:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:54.279056696 +0000 UTC m=+1493.901132754" 
watchObservedRunningTime="2025-11-29 07:25:54.292113412 +0000 UTC m=+1493.914189470" Nov 29 07:25:54 crc kubenswrapper[4828]: I1129 07:25:54.311903 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-578795589b-kkwlj" podStartSLOduration=4.3118812 podStartE2EDuration="4.3118812s" podCreationTimestamp="2025-11-29 07:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:54.308623136 +0000 UTC m=+1493.930699194" watchObservedRunningTime="2025-11-29 07:25:54.3118812 +0000 UTC m=+1493.933957258" Nov 29 07:25:54 crc kubenswrapper[4828]: I1129 07:25:54.687084 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84b768d757-5f2b9"] Nov 29 07:25:55 crc kubenswrapper[4828]: I1129 07:25:55.275200 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84b768d757-5f2b9" event={"ID":"3334d09a-df8a-448e-90a3-79f36ee70a07","Type":"ContainerStarted","Data":"aa1a894cbaefb6407d3553b9ec1014285e2559184e07c0425c9b6bf1b562e098"} Nov 29 07:25:56 crc kubenswrapper[4828]: I1129 07:25:56.300492 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84b768d757-5f2b9" event={"ID":"3334d09a-df8a-448e-90a3-79f36ee70a07","Type":"ContainerStarted","Data":"80203b54d892247c842fbc27582fb7c9817e176fe725f8514e9eb67ceb69822a"} Nov 29 07:25:56 crc kubenswrapper[4828]: I1129 07:25:56.301093 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84b768d757-5f2b9" event={"ID":"3334d09a-df8a-448e-90a3-79f36ee70a07","Type":"ContainerStarted","Data":"4b9fc96629f40a2ba2d650b4a150cc8e19ae7886b7f2188aeace9debae639078"} Nov 29 07:25:56 crc kubenswrapper[4828]: I1129 07:25:56.310300 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8ea3f184-e2a4-42b5-8215-3317a6b0a50e","Type":"ContainerStarted","Data":"56e73ae70b3d58618da38bfd87d0d1f57637b929683c888f4da705a9e5d18f42"} Nov 29 07:25:57 crc kubenswrapper[4828]: I1129 07:25:57.316884 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:25:58 crc kubenswrapper[4828]: E1129 07:25:58.872768 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="95cf632a677528bf161566975719001239ffeb10f9f291093a7f2c3cc8074508" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 07:25:58 crc kubenswrapper[4828]: E1129 07:25:58.877223 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="95cf632a677528bf161566975719001239ffeb10f9f291093a7f2c3cc8074508" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 07:25:58 crc kubenswrapper[4828]: E1129 07:25:58.879949 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="95cf632a677528bf161566975719001239ffeb10f9f291093a7f2c3cc8074508" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 07:25:58 crc kubenswrapper[4828]: E1129 07:25:58.880029 4828 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-999b4d64b-9brmm" podUID="28c4e1bf-c5c6-44af-97c5-e035b3e9aafc" containerName="heat-engine" Nov 29 07:25:59 crc kubenswrapper[4828]: I1129 07:25:59.517236 4828 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/neutron-84b768d757-5f2b9" podStartSLOduration=6.517214418 podStartE2EDuration="6.517214418s" podCreationTimestamp="2025-11-29 07:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:56.33894637 +0000 UTC m=+1495.961022448" watchObservedRunningTime="2025-11-29 07:25:59.517214418 +0000 UTC m=+1499.139290476" Nov 29 07:25:59 crc kubenswrapper[4828]: I1129 07:25:59.521741 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:00 crc kubenswrapper[4828]: I1129 07:26:00.592432 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:26:00 crc kubenswrapper[4828]: I1129 07:26:00.650525 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78d5585959-gnl5p"] Nov 29 07:26:00 crc kubenswrapper[4828]: I1129 07:26:00.650849 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78d5585959-gnl5p" podUID="4852ae69-6066-464b-9934-604b2b5ae8a4" containerName="dnsmasq-dns" containerID="cri-o://c54fefd56f9a67404a803deaeb56ff92fed6cb4c1bd9455d529651ed7ace016a" gracePeriod=10 Nov 29 07:26:00 crc kubenswrapper[4828]: I1129 07:26:00.720899 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2vrls" podUID="92f0fb97-210f-4cb2-82df-a802745d9cb0" containerName="registry-server" probeResult="failure" output=< Nov 29 07:26:00 crc kubenswrapper[4828]: timeout: failed to connect service ":50051" within 1s Nov 29 07:26:00 crc kubenswrapper[4828]: > Nov 29 07:26:01 crc kubenswrapper[4828]: I1129 07:26:01.364810 4828 generic.go:334] "Generic (PLEG): container finished" podID="4852ae69-6066-464b-9934-604b2b5ae8a4" containerID="c54fefd56f9a67404a803deaeb56ff92fed6cb4c1bd9455d529651ed7ace016a" exitCode=0 Nov 29 07:26:01 crc 
kubenswrapper[4828]: I1129 07:26:01.364868 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5585959-gnl5p" event={"ID":"4852ae69-6066-464b-9934-604b2b5ae8a4","Type":"ContainerDied","Data":"c54fefd56f9a67404a803deaeb56ff92fed6cb4c1bd9455d529651ed7ace016a"} Nov 29 07:26:03 crc kubenswrapper[4828]: I1129 07:26:03.867303 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-78d5585959-gnl5p" podUID="4852ae69-6066-464b-9934-604b2b5ae8a4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: connect: connection refused" Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.413517 4828 generic.go:334] "Generic (PLEG): container finished" podID="28c4e1bf-c5c6-44af-97c5-e035b3e9aafc" containerID="95cf632a677528bf161566975719001239ffeb10f9f291093a7f2c3cc8074508" exitCode=0 Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.424442 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-999b4d64b-9brmm" event={"ID":"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc","Type":"ContainerDied","Data":"95cf632a677528bf161566975719001239ffeb10f9f291093a7f2c3cc8074508"} Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.816196 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.821000 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.965728 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtggh\" (UniqueName: \"kubernetes.io/projected/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-kube-api-access-dtggh\") pod \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.966117 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj5km\" (UniqueName: \"kubernetes.io/projected/4852ae69-6066-464b-9934-604b2b5ae8a4-kube-api-access-qj5km\") pod \"4852ae69-6066-464b-9934-604b2b5ae8a4\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.966143 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-combined-ca-bundle\") pod \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.966229 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-dns-svc\") pod \"4852ae69-6066-464b-9934-604b2b5ae8a4\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.966263 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-dns-swift-storage-0\") pod \"4852ae69-6066-464b-9934-604b2b5ae8a4\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.966350 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-ovsdbserver-nb\") pod \"4852ae69-6066-464b-9934-604b2b5ae8a4\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.966405 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-config-data-custom\") pod \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.966461 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-config-data\") pod \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\" (UID: \"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc\") " Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.966492 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-config\") pod \"4852ae69-6066-464b-9934-604b2b5ae8a4\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.966536 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-ovsdbserver-sb\") pod \"4852ae69-6066-464b-9934-604b2b5ae8a4\" (UID: \"4852ae69-6066-464b-9934-604b2b5ae8a4\") " Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.972025 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "28c4e1bf-c5c6-44af-97c5-e035b3e9aafc" (UID: "28c4e1bf-c5c6-44af-97c5-e035b3e9aafc"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.973513 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4852ae69-6066-464b-9934-604b2b5ae8a4-kube-api-access-qj5km" (OuterVolumeSpecName: "kube-api-access-qj5km") pod "4852ae69-6066-464b-9934-604b2b5ae8a4" (UID: "4852ae69-6066-464b-9934-604b2b5ae8a4"). InnerVolumeSpecName "kube-api-access-qj5km". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.979629 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-kube-api-access-dtggh" (OuterVolumeSpecName: "kube-api-access-dtggh") pod "28c4e1bf-c5c6-44af-97c5-e035b3e9aafc" (UID: "28c4e1bf-c5c6-44af-97c5-e035b3e9aafc"). InnerVolumeSpecName "kube-api-access-dtggh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:05 crc kubenswrapper[4828]: I1129 07:26:05.999558 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28c4e1bf-c5c6-44af-97c5-e035b3e9aafc" (UID: "28c4e1bf-c5c6-44af-97c5-e035b3e9aafc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.027720 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4852ae69-6066-464b-9934-604b2b5ae8a4" (UID: "4852ae69-6066-464b-9934-604b2b5ae8a4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.027814 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4852ae69-6066-464b-9934-604b2b5ae8a4" (UID: "4852ae69-6066-464b-9934-604b2b5ae8a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.039447 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-config-data" (OuterVolumeSpecName: "config-data") pod "28c4e1bf-c5c6-44af-97c5-e035b3e9aafc" (UID: "28c4e1bf-c5c6-44af-97c5-e035b3e9aafc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.045242 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-config" (OuterVolumeSpecName: "config") pod "4852ae69-6066-464b-9934-604b2b5ae8a4" (UID: "4852ae69-6066-464b-9934-604b2b5ae8a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.051866 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4852ae69-6066-464b-9934-604b2b5ae8a4" (UID: "4852ae69-6066-464b-9934-604b2b5ae8a4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.068380 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.068417 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtggh\" (UniqueName: \"kubernetes.io/projected/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-kube-api-access-dtggh\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.068432 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj5km\" (UniqueName: \"kubernetes.io/projected/4852ae69-6066-464b-9934-604b2b5ae8a4-kube-api-access-qj5km\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.068445 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.068456 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.068467 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.068478 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 
07:26:06.068488 4828 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.068498 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.074322 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4852ae69-6066-464b-9934-604b2b5ae8a4" (UID: "4852ae69-6066-464b-9934-604b2b5ae8a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.171222 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4852ae69-6066-464b-9934-604b2b5ae8a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.425245 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5585959-gnl5p" event={"ID":"4852ae69-6066-464b-9934-604b2b5ae8a4","Type":"ContainerDied","Data":"576b8052aae0391a8d1df4f0c8a80ecf875bd851ed96d5afab22b41c6d257ccf"} Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.425296 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78d5585959-gnl5p" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.425318 4828 scope.go:117] "RemoveContainer" containerID="c54fefd56f9a67404a803deaeb56ff92fed6cb4c1bd9455d529651ed7ace016a" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.435073 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wdknn" event={"ID":"33043721-20af-4165-8035-2a4fbe295eb3","Type":"ContainerStarted","Data":"502d5ee4c39b3cefe8b609992d057b19b7ab830f3c89318e6332746c3f275db8"} Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.441815 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-999b4d64b-9brmm" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.443123 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-999b4d64b-9brmm" event={"ID":"28c4e1bf-c5c6-44af-97c5-e035b3e9aafc","Type":"ContainerDied","Data":"1410b5797a1cd7dc270dc68394ef441f131ae5d09eaadea1558b55a6b516d305"} Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.448221 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ea3f184-e2a4-42b5-8215-3317a6b0a50e","Type":"ContainerStarted","Data":"453218183e4bda76d8abf3244f08ca3767a43dd3388d391f8c33e067ec864666"} Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.448635 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="ceilometer-central-agent" containerID="cri-o://90a72fcd49d483e28014d981981a6d0f2d26d67b6f5b5957e4b234f8f6a88506" gracePeriod=30 Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.448650 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.448647 4828 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="sg-core" containerID="cri-o://56e73ae70b3d58618da38bfd87d0d1f57637b929683c888f4da705a9e5d18f42" gracePeriod=30 Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.448669 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="ceilometer-notification-agent" containerID="cri-o://b884a3b25b7e05c18834638576c07c664d8f0cf7eba93a15a19c1d340f8fbe87" gracePeriod=30 Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.448666 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="proxy-httpd" containerID="cri-o://453218183e4bda76d8abf3244f08ca3767a43dd3388d391f8c33e067ec864666" gracePeriod=30 Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.453721 4828 scope.go:117] "RemoveContainer" containerID="d43c2d7a14092bf1d008745d1d65da292bed822ed50ba4c4f1dfe2fd4f1e9a6b" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.467008 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-wdknn" podStartSLOduration=2.106210313 podStartE2EDuration="19.466985715s" podCreationTimestamp="2025-11-29 07:25:47 +0000 UTC" firstStartedPulling="2025-11-29 07:25:48.071657923 +0000 UTC m=+1487.693733981" lastFinishedPulling="2025-11-29 07:26:05.432433325 +0000 UTC m=+1505.054509383" observedRunningTime="2025-11-29 07:26:06.454047763 +0000 UTC m=+1506.076123821" watchObservedRunningTime="2025-11-29 07:26:06.466985715 +0000 UTC m=+1506.089061773" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.497205 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.857293661 podStartE2EDuration="20.497181931s" 
podCreationTimestamp="2025-11-29 07:25:46 +0000 UTC" firstStartedPulling="2025-11-29 07:25:48.009942737 +0000 UTC m=+1487.632018795" lastFinishedPulling="2025-11-29 07:26:04.649831007 +0000 UTC m=+1504.271907065" observedRunningTime="2025-11-29 07:26:06.494448961 +0000 UTC m=+1506.116525029" watchObservedRunningTime="2025-11-29 07:26:06.497181931 +0000 UTC m=+1506.119257989" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.528346 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78d5585959-gnl5p"] Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.534926 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78d5585959-gnl5p"] Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.537699 4828 scope.go:117] "RemoveContainer" containerID="95cf632a677528bf161566975719001239ffeb10f9f291093a7f2c3cc8074508" Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.546463 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-999b4d64b-9brmm"] Nov 29 07:26:06 crc kubenswrapper[4828]: I1129 07:26:06.558962 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-999b4d64b-9brmm"] Nov 29 07:26:07 crc kubenswrapper[4828]: I1129 07:26:07.435151 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28c4e1bf-c5c6-44af-97c5-e035b3e9aafc" path="/var/lib/kubelet/pods/28c4e1bf-c5c6-44af-97c5-e035b3e9aafc/volumes" Nov 29 07:26:07 crc kubenswrapper[4828]: I1129 07:26:07.436231 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4852ae69-6066-464b-9934-604b2b5ae8a4" path="/var/lib/kubelet/pods/4852ae69-6066-464b-9934-604b2b5ae8a4/volumes" Nov 29 07:26:07 crc kubenswrapper[4828]: I1129 07:26:07.458502 4828 generic.go:334] "Generic (PLEG): container finished" podID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerID="453218183e4bda76d8abf3244f08ca3767a43dd3388d391f8c33e067ec864666" exitCode=0 Nov 29 07:26:07 crc 
kubenswrapper[4828]: I1129 07:26:07.458542 4828 generic.go:334] "Generic (PLEG): container finished" podID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerID="56e73ae70b3d58618da38bfd87d0d1f57637b929683c888f4da705a9e5d18f42" exitCode=2 Nov 29 07:26:07 crc kubenswrapper[4828]: I1129 07:26:07.458552 4828 generic.go:334] "Generic (PLEG): container finished" podID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerID="b884a3b25b7e05c18834638576c07c664d8f0cf7eba93a15a19c1d340f8fbe87" exitCode=0 Nov 29 07:26:07 crc kubenswrapper[4828]: I1129 07:26:07.458563 4828 generic.go:334] "Generic (PLEG): container finished" podID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerID="90a72fcd49d483e28014d981981a6d0f2d26d67b6f5b5957e4b234f8f6a88506" exitCode=0 Nov 29 07:26:07 crc kubenswrapper[4828]: I1129 07:26:07.458610 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ea3f184-e2a4-42b5-8215-3317a6b0a50e","Type":"ContainerDied","Data":"453218183e4bda76d8abf3244f08ca3767a43dd3388d391f8c33e067ec864666"} Nov 29 07:26:07 crc kubenswrapper[4828]: I1129 07:26:07.458656 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ea3f184-e2a4-42b5-8215-3317a6b0a50e","Type":"ContainerDied","Data":"56e73ae70b3d58618da38bfd87d0d1f57637b929683c888f4da705a9e5d18f42"} Nov 29 07:26:07 crc kubenswrapper[4828]: I1129 07:26:07.458667 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ea3f184-e2a4-42b5-8215-3317a6b0a50e","Type":"ContainerDied","Data":"b884a3b25b7e05c18834638576c07c664d8f0cf7eba93a15a19c1d340f8fbe87"} Nov 29 07:26:07 crc kubenswrapper[4828]: I1129 07:26:07.458676 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ea3f184-e2a4-42b5-8215-3317a6b0a50e","Type":"ContainerDied","Data":"90a72fcd49d483e28014d981981a6d0f2d26d67b6f5b5957e4b234f8f6a88506"} Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.501010 4828 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ea3f184-e2a4-42b5-8215-3317a6b0a50e","Type":"ContainerDied","Data":"014561d0493301fe6f4ac3f4764457718bd8e8981841a2a22aac8d312a3cc43f"} Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.501401 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="014561d0493301fe6f4ac3f4764457718bd8e8981841a2a22aac8d312a3cc43f" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.505909 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.539103 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-sg-core-conf-yaml\") pod \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.539192 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9skq\" (UniqueName: \"kubernetes.io/projected/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-kube-api-access-t9skq\") pod \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.539301 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-combined-ca-bundle\") pod \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.539980 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-log-httpd\") pod \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\" 
(UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.540019 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-config-data\") pod \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.540054 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-run-httpd\") pod \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.540103 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-scripts\") pod \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\" (UID: \"8ea3f184-e2a4-42b5-8215-3317a6b0a50e\") " Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.540362 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8ea3f184-e2a4-42b5-8215-3317a6b0a50e" (UID: "8ea3f184-e2a4-42b5-8215-3317a6b0a50e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.540603 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8ea3f184-e2a4-42b5-8215-3317a6b0a50e" (UID: "8ea3f184-e2a4-42b5-8215-3317a6b0a50e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.540880 4828 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.540897 4828 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.552450 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-scripts" (OuterVolumeSpecName: "scripts") pod "8ea3f184-e2a4-42b5-8215-3317a6b0a50e" (UID: "8ea3f184-e2a4-42b5-8215-3317a6b0a50e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.556643 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-kube-api-access-t9skq" (OuterVolumeSpecName: "kube-api-access-t9skq") pod "8ea3f184-e2a4-42b5-8215-3317a6b0a50e" (UID: "8ea3f184-e2a4-42b5-8215-3317a6b0a50e"). InnerVolumeSpecName "kube-api-access-t9skq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.591576 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8ea3f184-e2a4-42b5-8215-3317a6b0a50e" (UID: "8ea3f184-e2a4-42b5-8215-3317a6b0a50e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.643239 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.643287 4828 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.643298 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9skq\" (UniqueName: \"kubernetes.io/projected/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-kube-api-access-t9skq\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.645564 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ea3f184-e2a4-42b5-8215-3317a6b0a50e" (UID: "8ea3f184-e2a4-42b5-8215-3317a6b0a50e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.662358 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-config-data" (OuterVolumeSpecName: "config-data") pod "8ea3f184-e2a4-42b5-8215-3317a6b0a50e" (UID: "8ea3f184-e2a4-42b5-8215-3317a6b0a50e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.712118 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2vrls" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.744631 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.744667 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea3f184-e2a4-42b5-8215-3317a6b0a50e-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.758150 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2vrls" Nov 29 07:26:09 crc kubenswrapper[4828]: I1129 07:26:09.951134 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2vrls"] Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.509804 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.542760 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.553248 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.605669 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:10 crc kubenswrapper[4828]: E1129 07:26:10.607966 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4852ae69-6066-464b-9934-604b2b5ae8a4" containerName="init" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.608002 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4852ae69-6066-464b-9934-604b2b5ae8a4" containerName="init" Nov 29 07:26:10 crc kubenswrapper[4828]: E1129 07:26:10.608039 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="ceilometer-central-agent" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.608049 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="ceilometer-central-agent" Nov 29 07:26:10 crc kubenswrapper[4828]: E1129 07:26:10.608067 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4852ae69-6066-464b-9934-604b2b5ae8a4" containerName="dnsmasq-dns" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.608075 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4852ae69-6066-464b-9934-604b2b5ae8a4" containerName="dnsmasq-dns" Nov 29 07:26:10 crc kubenswrapper[4828]: E1129 07:26:10.608092 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="ceilometer-notification-agent" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.608099 4828 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="ceilometer-notification-agent" Nov 29 07:26:10 crc kubenswrapper[4828]: E1129 07:26:10.608240 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="proxy-httpd" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.608248 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="proxy-httpd" Nov 29 07:26:10 crc kubenswrapper[4828]: E1129 07:26:10.608299 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="sg-core" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.608306 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="sg-core" Nov 29 07:26:10 crc kubenswrapper[4828]: E1129 07:26:10.608313 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c4e1bf-c5c6-44af-97c5-e035b3e9aafc" containerName="heat-engine" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.608318 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c4e1bf-c5c6-44af-97c5-e035b3e9aafc" containerName="heat-engine" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.608619 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="ceilometer-central-agent" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.608641 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="proxy-httpd" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.608655 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c4e1bf-c5c6-44af-97c5-e035b3e9aafc" containerName="heat-engine" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.608667 4828 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4852ae69-6066-464b-9934-604b2b5ae8a4" containerName="dnsmasq-dns" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.608679 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="ceilometer-notification-agent" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.608688 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" containerName="sg-core" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.618662 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.621066 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.624485 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.624861 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.659604 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.659674 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmgvf\" (UniqueName: \"kubernetes.io/projected/59be363e-f320-4a44-9482-e25c4a3a6fb8-kube-api-access-lmgvf\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.659700 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.659744 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59be363e-f320-4a44-9482-e25c4a3a6fb8-log-httpd\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.659912 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-scripts\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.659964 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-config-data\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.660091 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59be363e-f320-4a44-9482-e25c4a3a6fb8-run-httpd\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.761244 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-scripts\") pod 
\"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.761354 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-config-data\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.761422 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59be363e-f320-4a44-9482-e25c4a3a6fb8-run-httpd\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.761469 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.761501 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmgvf\" (UniqueName: \"kubernetes.io/projected/59be363e-f320-4a44-9482-e25c4a3a6fb8-kube-api-access-lmgvf\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.761517 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.761547 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59be363e-f320-4a44-9482-e25c4a3a6fb8-log-httpd\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.762198 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59be363e-f320-4a44-9482-e25c4a3a6fb8-log-httpd\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.763166 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59be363e-f320-4a44-9482-e25c4a3a6fb8-run-httpd\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.767149 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-scripts\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.767284 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.767683 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc 
kubenswrapper[4828]: I1129 07:26:10.778792 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-config-data\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.782322 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmgvf\" (UniqueName: \"kubernetes.io/projected/59be363e-f320-4a44-9482-e25c4a3a6fb8-kube-api-access-lmgvf\") pod \"ceilometer-0\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " pod="openstack/ceilometer-0" Nov 29 07:26:10 crc kubenswrapper[4828]: I1129 07:26:10.938653 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:11 crc kubenswrapper[4828]: I1129 07:26:11.422640 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea3f184-e2a4-42b5-8215-3317a6b0a50e" path="/var/lib/kubelet/pods/8ea3f184-e2a4-42b5-8215-3317a6b0a50e/volumes" Nov 29 07:26:11 crc kubenswrapper[4828]: I1129 07:26:11.460319 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:11 crc kubenswrapper[4828]: I1129 07:26:11.535706 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2vrls" podUID="92f0fb97-210f-4cb2-82df-a802745d9cb0" containerName="registry-server" containerID="cri-o://c3902823655c9fe7939f8f2ea1edab8ed053c0fdf6488689a5d503a1f09fa053" gracePeriod=2 Nov 29 07:26:11 crc kubenswrapper[4828]: I1129 07:26:11.536064 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59be363e-f320-4a44-9482-e25c4a3a6fb8","Type":"ContainerStarted","Data":"d0da35bb1b17993ece60fcafce6484904786237dcda2f306377e16894cb9c679"} Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.051571 4828 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vrls" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.085070 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f0fb97-210f-4cb2-82df-a802745d9cb0-utilities\") pod \"92f0fb97-210f-4cb2-82df-a802745d9cb0\" (UID: \"92f0fb97-210f-4cb2-82df-a802745d9cb0\") " Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.085239 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f0fb97-210f-4cb2-82df-a802745d9cb0-catalog-content\") pod \"92f0fb97-210f-4cb2-82df-a802745d9cb0\" (UID: \"92f0fb97-210f-4cb2-82df-a802745d9cb0\") " Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.085426 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w865x\" (UniqueName: \"kubernetes.io/projected/92f0fb97-210f-4cb2-82df-a802745d9cb0-kube-api-access-w865x\") pod \"92f0fb97-210f-4cb2-82df-a802745d9cb0\" (UID: \"92f0fb97-210f-4cb2-82df-a802745d9cb0\") " Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.086049 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92f0fb97-210f-4cb2-82df-a802745d9cb0-utilities" (OuterVolumeSpecName: "utilities") pod "92f0fb97-210f-4cb2-82df-a802745d9cb0" (UID: "92f0fb97-210f-4cb2-82df-a802745d9cb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.114937 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f0fb97-210f-4cb2-82df-a802745d9cb0-kube-api-access-w865x" (OuterVolumeSpecName: "kube-api-access-w865x") pod "92f0fb97-210f-4cb2-82df-a802745d9cb0" (UID: "92f0fb97-210f-4cb2-82df-a802745d9cb0"). 
InnerVolumeSpecName "kube-api-access-w865x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.187216 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f0fb97-210f-4cb2-82df-a802745d9cb0-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.187280 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w865x\" (UniqueName: \"kubernetes.io/projected/92f0fb97-210f-4cb2-82df-a802745d9cb0-kube-api-access-w865x\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.274167 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92f0fb97-210f-4cb2-82df-a802745d9cb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92f0fb97-210f-4cb2-82df-a802745d9cb0" (UID: "92f0fb97-210f-4cb2-82df-a802745d9cb0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.291529 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f0fb97-210f-4cb2-82df-a802745d9cb0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.548072 4828 generic.go:334] "Generic (PLEG): container finished" podID="92f0fb97-210f-4cb2-82df-a802745d9cb0" containerID="c3902823655c9fe7939f8f2ea1edab8ed053c0fdf6488689a5d503a1f09fa053" exitCode=0 Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.548145 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vrls" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.548173 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrls" event={"ID":"92f0fb97-210f-4cb2-82df-a802745d9cb0","Type":"ContainerDied","Data":"c3902823655c9fe7939f8f2ea1edab8ed053c0fdf6488689a5d503a1f09fa053"} Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.548576 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrls" event={"ID":"92f0fb97-210f-4cb2-82df-a802745d9cb0","Type":"ContainerDied","Data":"f8c8c34b4643417500b407f433bbf573e604952810ff4b283e8c8d34c1da1a63"} Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.548604 4828 scope.go:117] "RemoveContainer" containerID="c3902823655c9fe7939f8f2ea1edab8ed053c0fdf6488689a5d503a1f09fa053" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.550411 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59be363e-f320-4a44-9482-e25c4a3a6fb8","Type":"ContainerStarted","Data":"b83c55cfcc09b68509b5ef760412e0d09a75c65ecdc75237ad3b3dc56b5ca072"} Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.568248 4828 scope.go:117] "RemoveContainer" containerID="22e5f38ef9e4d97ed85f9894dd3feb9dd432a2db50f4fc549f95d90b7022acbf" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.584414 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2vrls"] Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.594426 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2vrls"] Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.616897 4828 scope.go:117] "RemoveContainer" containerID="444d9486cd880d27165755d1f63579521a06c629ca06f8cc6d3998358040299c" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.657372 4828 scope.go:117] "RemoveContainer" 
containerID="c3902823655c9fe7939f8f2ea1edab8ed053c0fdf6488689a5d503a1f09fa053" Nov 29 07:26:12 crc kubenswrapper[4828]: E1129 07:26:12.657901 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3902823655c9fe7939f8f2ea1edab8ed053c0fdf6488689a5d503a1f09fa053\": container with ID starting with c3902823655c9fe7939f8f2ea1edab8ed053c0fdf6488689a5d503a1f09fa053 not found: ID does not exist" containerID="c3902823655c9fe7939f8f2ea1edab8ed053c0fdf6488689a5d503a1f09fa053" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.657944 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3902823655c9fe7939f8f2ea1edab8ed053c0fdf6488689a5d503a1f09fa053"} err="failed to get container status \"c3902823655c9fe7939f8f2ea1edab8ed053c0fdf6488689a5d503a1f09fa053\": rpc error: code = NotFound desc = could not find container \"c3902823655c9fe7939f8f2ea1edab8ed053c0fdf6488689a5d503a1f09fa053\": container with ID starting with c3902823655c9fe7939f8f2ea1edab8ed053c0fdf6488689a5d503a1f09fa053 not found: ID does not exist" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.657978 4828 scope.go:117] "RemoveContainer" containerID="22e5f38ef9e4d97ed85f9894dd3feb9dd432a2db50f4fc549f95d90b7022acbf" Nov 29 07:26:12 crc kubenswrapper[4828]: E1129 07:26:12.658226 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22e5f38ef9e4d97ed85f9894dd3feb9dd432a2db50f4fc549f95d90b7022acbf\": container with ID starting with 22e5f38ef9e4d97ed85f9894dd3feb9dd432a2db50f4fc549f95d90b7022acbf not found: ID does not exist" containerID="22e5f38ef9e4d97ed85f9894dd3feb9dd432a2db50f4fc549f95d90b7022acbf" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.658256 4828 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"22e5f38ef9e4d97ed85f9894dd3feb9dd432a2db50f4fc549f95d90b7022acbf"} err="failed to get container status \"22e5f38ef9e4d97ed85f9894dd3feb9dd432a2db50f4fc549f95d90b7022acbf\": rpc error: code = NotFound desc = could not find container \"22e5f38ef9e4d97ed85f9894dd3feb9dd432a2db50f4fc549f95d90b7022acbf\": container with ID starting with 22e5f38ef9e4d97ed85f9894dd3feb9dd432a2db50f4fc549f95d90b7022acbf not found: ID does not exist" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.658289 4828 scope.go:117] "RemoveContainer" containerID="444d9486cd880d27165755d1f63579521a06c629ca06f8cc6d3998358040299c" Nov 29 07:26:12 crc kubenswrapper[4828]: E1129 07:26:12.658722 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"444d9486cd880d27165755d1f63579521a06c629ca06f8cc6d3998358040299c\": container with ID starting with 444d9486cd880d27165755d1f63579521a06c629ca06f8cc6d3998358040299c not found: ID does not exist" containerID="444d9486cd880d27165755d1f63579521a06c629ca06f8cc6d3998358040299c" Nov 29 07:26:12 crc kubenswrapper[4828]: I1129 07:26:12.658765 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444d9486cd880d27165755d1f63579521a06c629ca06f8cc6d3998358040299c"} err="failed to get container status \"444d9486cd880d27165755d1f63579521a06c629ca06f8cc6d3998358040299c\": rpc error: code = NotFound desc = could not find container \"444d9486cd880d27165755d1f63579521a06c629ca06f8cc6d3998358040299c\": container with ID starting with 444d9486cd880d27165755d1f63579521a06c629ca06f8cc6d3998358040299c not found: ID does not exist" Nov 29 07:26:13 crc kubenswrapper[4828]: I1129 07:26:13.424554 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92f0fb97-210f-4cb2-82df-a802745d9cb0" path="/var/lib/kubelet/pods/92f0fb97-210f-4cb2-82df-a802745d9cb0/volumes" Nov 29 07:26:13 crc kubenswrapper[4828]: I1129 
07:26:13.588489 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59be363e-f320-4a44-9482-e25c4a3a6fb8","Type":"ContainerStarted","Data":"796a500b84f78382fa5e3902766e5375998cfc36da03e047b26c02a0df888109"} Nov 29 07:26:14 crc kubenswrapper[4828]: I1129 07:26:14.605897 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59be363e-f320-4a44-9482-e25c4a3a6fb8","Type":"ContainerStarted","Data":"14d0b096dd4b10ee8e5917672e8d23a05e8f64c68e91a6ec657fc93e23977453"} Nov 29 07:26:16 crc kubenswrapper[4828]: I1129 07:26:16.626659 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59be363e-f320-4a44-9482-e25c4a3a6fb8","Type":"ContainerStarted","Data":"cc4d74cff9e130dc97bcf3dc4000e73944edc68dff0c590eba0e1adcfe67f997"} Nov 29 07:26:16 crc kubenswrapper[4828]: I1129 07:26:16.628631 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:26:16 crc kubenswrapper[4828]: I1129 07:26:16.665685 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.663131049 podStartE2EDuration="6.665662335s" podCreationTimestamp="2025-11-29 07:26:10 +0000 UTC" firstStartedPulling="2025-11-29 07:26:11.465508189 +0000 UTC m=+1511.087584237" lastFinishedPulling="2025-11-29 07:26:15.468039465 +0000 UTC m=+1515.090115523" observedRunningTime="2025-11-29 07:26:16.6588368 +0000 UTC m=+1516.280912868" watchObservedRunningTime="2025-11-29 07:26:16.665662335 +0000 UTC m=+1516.287738393" Nov 29 07:26:17 crc kubenswrapper[4828]: I1129 07:26:17.610571 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:26:17 crc kubenswrapper[4828]: I1129 07:26:17.611069 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="17156cfb-ec83-47db-955b-44f5045179e8" containerName="glance-log" containerID="cri-o://4d4959de6c962437c90a0ce964a42c61e5111d50625e761b8ed4d74c7891148c" gracePeriod=30 Nov 29 07:26:17 crc kubenswrapper[4828]: I1129 07:26:17.611214 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="17156cfb-ec83-47db-955b-44f5045179e8" containerName="glance-httpd" containerID="cri-o://17d0e2fd6522341907e3240c838e290e357aec11784787f7ffdd2291e16d0003" gracePeriod=30 Nov 29 07:26:18 crc kubenswrapper[4828]: I1129 07:26:18.647005 4828 generic.go:334] "Generic (PLEG): container finished" podID="17156cfb-ec83-47db-955b-44f5045179e8" containerID="4d4959de6c962437c90a0ce964a42c61e5111d50625e761b8ed4d74c7891148c" exitCode=143 Nov 29 07:26:18 crc kubenswrapper[4828]: I1129 07:26:18.647115 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"17156cfb-ec83-47db-955b-44f5045179e8","Type":"ContainerDied","Data":"4d4959de6c962437c90a0ce964a42c61e5111d50625e761b8ed4d74c7891148c"} Nov 29 07:26:19 crc kubenswrapper[4828]: I1129 07:26:19.171326 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:26:19 crc kubenswrapper[4828]: I1129 07:26:19.171983 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="319d7dd8-e096-41a6-8394-fed7f944e1ae" containerName="glance-log" containerID="cri-o://c62ae5c9596d3334c6233ec70ed886ac71d9d21882603ace7ef5e193a9ec13b5" gracePeriod=30 Nov 29 07:26:19 crc kubenswrapper[4828]: I1129 07:26:19.172150 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="319d7dd8-e096-41a6-8394-fed7f944e1ae" containerName="glance-httpd" containerID="cri-o://b7b44572a0f02a5f5f6641ea4e39ebe00423ae62a08bf8e3342f933c94616f77" gracePeriod=30 Nov 29 
07:26:19 crc kubenswrapper[4828]: I1129 07:26:19.658993 4828 generic.go:334] "Generic (PLEG): container finished" podID="319d7dd8-e096-41a6-8394-fed7f944e1ae" containerID="c62ae5c9596d3334c6233ec70ed886ac71d9d21882603ace7ef5e193a9ec13b5" exitCode=143 Nov 29 07:26:19 crc kubenswrapper[4828]: I1129 07:26:19.659035 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"319d7dd8-e096-41a6-8394-fed7f944e1ae","Type":"ContainerDied","Data":"c62ae5c9596d3334c6233ec70ed886ac71d9d21882603ace7ef5e193a9ec13b5"} Nov 29 07:26:20 crc kubenswrapper[4828]: I1129 07:26:20.637800 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:20 crc kubenswrapper[4828]: I1129 07:26:20.638320 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="ceilometer-central-agent" containerID="cri-o://b83c55cfcc09b68509b5ef760412e0d09a75c65ecdc75237ad3b3dc56b5ca072" gracePeriod=30 Nov 29 07:26:20 crc kubenswrapper[4828]: I1129 07:26:20.638447 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="sg-core" containerID="cri-o://14d0b096dd4b10ee8e5917672e8d23a05e8f64c68e91a6ec657fc93e23977453" gracePeriod=30 Nov 29 07:26:20 crc kubenswrapper[4828]: I1129 07:26:20.638473 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="ceilometer-notification-agent" containerID="cri-o://796a500b84f78382fa5e3902766e5375998cfc36da03e047b26c02a0df888109" gracePeriod=30 Nov 29 07:26:20 crc kubenswrapper[4828]: I1129 07:26:20.638420 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" 
containerName="proxy-httpd" containerID="cri-o://cc4d74cff9e130dc97bcf3dc4000e73944edc68dff0c590eba0e1adcfe67f997" gracePeriod=30 Nov 29 07:26:20 crc kubenswrapper[4828]: I1129 07:26:20.751332 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.519865 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.679331 4828 generic.go:334] "Generic (PLEG): container finished" podID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerID="cc4d74cff9e130dc97bcf3dc4000e73944edc68dff0c590eba0e1adcfe67f997" exitCode=0 Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.679367 4828 generic.go:334] "Generic (PLEG): container finished" podID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerID="14d0b096dd4b10ee8e5917672e8d23a05e8f64c68e91a6ec657fc93e23977453" exitCode=2 Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.679374 4828 generic.go:334] "Generic (PLEG): container finished" podID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerID="796a500b84f78382fa5e3902766e5375998cfc36da03e047b26c02a0df888109" exitCode=0 Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.679422 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59be363e-f320-4a44-9482-e25c4a3a6fb8","Type":"ContainerDied","Data":"cc4d74cff9e130dc97bcf3dc4000e73944edc68dff0c590eba0e1adcfe67f997"} Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.679482 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59be363e-f320-4a44-9482-e25c4a3a6fb8","Type":"ContainerDied","Data":"14d0b096dd4b10ee8e5917672e8d23a05e8f64c68e91a6ec657fc93e23977453"} Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.679498 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"59be363e-f320-4a44-9482-e25c4a3a6fb8","Type":"ContainerDied","Data":"796a500b84f78382fa5e3902766e5375998cfc36da03e047b26c02a0df888109"} Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.681849 4828 generic.go:334] "Generic (PLEG): container finished" podID="17156cfb-ec83-47db-955b-44f5045179e8" containerID="17d0e2fd6522341907e3240c838e290e357aec11784787f7ffdd2291e16d0003" exitCode=0 Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.681882 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"17156cfb-ec83-47db-955b-44f5045179e8","Type":"ContainerDied","Data":"17d0e2fd6522341907e3240c838e290e357aec11784787f7ffdd2291e16d0003"} Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.681905 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.681925 4828 scope.go:117] "RemoveContainer" containerID="17d0e2fd6522341907e3240c838e290e357aec11784787f7ffdd2291e16d0003" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.681910 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"17156cfb-ec83-47db-955b-44f5045179e8","Type":"ContainerDied","Data":"1af37e5634fd5e5e30f3adf849f4a319cc82ca909ed0de28d5fe3cc382eb7722"} Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.684813 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17156cfb-ec83-47db-955b-44f5045179e8-httpd-run\") pod \"17156cfb-ec83-47db-955b-44f5045179e8\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.684894 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zzmn\" (UniqueName: 
\"kubernetes.io/projected/17156cfb-ec83-47db-955b-44f5045179e8-kube-api-access-7zzmn\") pod \"17156cfb-ec83-47db-955b-44f5045179e8\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.684973 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-public-tls-certs\") pod \"17156cfb-ec83-47db-955b-44f5045179e8\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.685042 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17156cfb-ec83-47db-955b-44f5045179e8-logs\") pod \"17156cfb-ec83-47db-955b-44f5045179e8\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.685078 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-combined-ca-bundle\") pod \"17156cfb-ec83-47db-955b-44f5045179e8\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.685102 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-scripts\") pod \"17156cfb-ec83-47db-955b-44f5045179e8\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.685148 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-config-data\") pod \"17156cfb-ec83-47db-955b-44f5045179e8\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.685172 4828 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"17156cfb-ec83-47db-955b-44f5045179e8\" (UID: \"17156cfb-ec83-47db-955b-44f5045179e8\") " Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.685711 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17156cfb-ec83-47db-955b-44f5045179e8-logs" (OuterVolumeSpecName: "logs") pod "17156cfb-ec83-47db-955b-44f5045179e8" (UID: "17156cfb-ec83-47db-955b-44f5045179e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.685769 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17156cfb-ec83-47db-955b-44f5045179e8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "17156cfb-ec83-47db-955b-44f5045179e8" (UID: "17156cfb-ec83-47db-955b-44f5045179e8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.691409 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "17156cfb-ec83-47db-955b-44f5045179e8" (UID: "17156cfb-ec83-47db-955b-44f5045179e8"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.692646 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17156cfb-ec83-47db-955b-44f5045179e8-kube-api-access-7zzmn" (OuterVolumeSpecName: "kube-api-access-7zzmn") pod "17156cfb-ec83-47db-955b-44f5045179e8" (UID: "17156cfb-ec83-47db-955b-44f5045179e8"). InnerVolumeSpecName "kube-api-access-7zzmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.694000 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-scripts" (OuterVolumeSpecName: "scripts") pod "17156cfb-ec83-47db-955b-44f5045179e8" (UID: "17156cfb-ec83-47db-955b-44f5045179e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.709192 4828 scope.go:117] "RemoveContainer" containerID="4d4959de6c962437c90a0ce964a42c61e5111d50625e761b8ed4d74c7891148c" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.720596 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17156cfb-ec83-47db-955b-44f5045179e8" (UID: "17156cfb-ec83-47db-955b-44f5045179e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.727943 4828 scope.go:117] "RemoveContainer" containerID="17d0e2fd6522341907e3240c838e290e357aec11784787f7ffdd2291e16d0003" Nov 29 07:26:21 crc kubenswrapper[4828]: E1129 07:26:21.728538 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17d0e2fd6522341907e3240c838e290e357aec11784787f7ffdd2291e16d0003\": container with ID starting with 17d0e2fd6522341907e3240c838e290e357aec11784787f7ffdd2291e16d0003 not found: ID does not exist" containerID="17d0e2fd6522341907e3240c838e290e357aec11784787f7ffdd2291e16d0003" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.728596 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17d0e2fd6522341907e3240c838e290e357aec11784787f7ffdd2291e16d0003"} err="failed to get container status \"17d0e2fd6522341907e3240c838e290e357aec11784787f7ffdd2291e16d0003\": rpc error: code = NotFound desc = could not find container \"17d0e2fd6522341907e3240c838e290e357aec11784787f7ffdd2291e16d0003\": container with ID starting with 17d0e2fd6522341907e3240c838e290e357aec11784787f7ffdd2291e16d0003 not found: ID does not exist" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.728629 4828 scope.go:117] "RemoveContainer" containerID="4d4959de6c962437c90a0ce964a42c61e5111d50625e761b8ed4d74c7891148c" Nov 29 07:26:21 crc kubenswrapper[4828]: E1129 07:26:21.728961 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d4959de6c962437c90a0ce964a42c61e5111d50625e761b8ed4d74c7891148c\": container with ID starting with 4d4959de6c962437c90a0ce964a42c61e5111d50625e761b8ed4d74c7891148c not found: ID does not exist" containerID="4d4959de6c962437c90a0ce964a42c61e5111d50625e761b8ed4d74c7891148c" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.728985 
4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4959de6c962437c90a0ce964a42c61e5111d50625e761b8ed4d74c7891148c"} err="failed to get container status \"4d4959de6c962437c90a0ce964a42c61e5111d50625e761b8ed4d74c7891148c\": rpc error: code = NotFound desc = could not find container \"4d4959de6c962437c90a0ce964a42c61e5111d50625e761b8ed4d74c7891148c\": container with ID starting with 4d4959de6c962437c90a0ce964a42c61e5111d50625e761b8ed4d74c7891148c not found: ID does not exist" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.741799 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "17156cfb-ec83-47db-955b-44f5045179e8" (UID: "17156cfb-ec83-47db-955b-44f5045179e8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.749543 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-config-data" (OuterVolumeSpecName: "config-data") pod "17156cfb-ec83-47db-955b-44f5045179e8" (UID: "17156cfb-ec83-47db-955b-44f5045179e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.787186 4828 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.787218 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17156cfb-ec83-47db-955b-44f5045179e8-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.787228 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.787236 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.787247 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17156cfb-ec83-47db-955b-44f5045179e8-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.787290 4828 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.787300 4828 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17156cfb-ec83-47db-955b-44f5045179e8-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.787309 4828 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-7zzmn\" (UniqueName: \"kubernetes.io/projected/17156cfb-ec83-47db-955b-44f5045179e8-kube-api-access-7zzmn\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.817710 4828 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 29 07:26:21 crc kubenswrapper[4828]: I1129 07:26:21.889025 4828 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.033065 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.055973 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.066250 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:26:22 crc kubenswrapper[4828]: E1129 07:26:22.066816 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f0fb97-210f-4cb2-82df-a802745d9cb0" containerName="extract-content" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.066836 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f0fb97-210f-4cb2-82df-a802745d9cb0" containerName="extract-content" Nov 29 07:26:22 crc kubenswrapper[4828]: E1129 07:26:22.066858 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f0fb97-210f-4cb2-82df-a802745d9cb0" containerName="extract-utilities" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.066866 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f0fb97-210f-4cb2-82df-a802745d9cb0" containerName="extract-utilities" Nov 29 07:26:22 crc kubenswrapper[4828]: E1129 07:26:22.066906 4828 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f0fb97-210f-4cb2-82df-a802745d9cb0" containerName="registry-server" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.066915 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f0fb97-210f-4cb2-82df-a802745d9cb0" containerName="registry-server" Nov 29 07:26:22 crc kubenswrapper[4828]: E1129 07:26:22.066932 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17156cfb-ec83-47db-955b-44f5045179e8" containerName="glance-httpd" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.066939 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="17156cfb-ec83-47db-955b-44f5045179e8" containerName="glance-httpd" Nov 29 07:26:22 crc kubenswrapper[4828]: E1129 07:26:22.066948 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17156cfb-ec83-47db-955b-44f5045179e8" containerName="glance-log" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.066955 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="17156cfb-ec83-47db-955b-44f5045179e8" containerName="glance-log" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.067211 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="17156cfb-ec83-47db-955b-44f5045179e8" containerName="glance-httpd" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.067231 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="17156cfb-ec83-47db-955b-44f5045179e8" containerName="glance-log" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.067243 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f0fb97-210f-4cb2-82df-a802745d9cb0" containerName="registry-server" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.068711 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.072808 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.073067 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.086427 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.194468 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecfa61e1-38ee-4cc5-80ac-093b1880135a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.194551 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.194577 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecfa61e1-38ee-4cc5-80ac-093b1880135a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.194670 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr9z5\" (UniqueName: 
\"kubernetes.io/projected/ecfa61e1-38ee-4cc5-80ac-093b1880135a-kube-api-access-lr9z5\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.194722 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecfa61e1-38ee-4cc5-80ac-093b1880135a-logs\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.194746 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecfa61e1-38ee-4cc5-80ac-093b1880135a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.194840 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecfa61e1-38ee-4cc5-80ac-093b1880135a-scripts\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.195060 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecfa61e1-38ee-4cc5-80ac-093b1880135a-config-data\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.296613 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ecfa61e1-38ee-4cc5-80ac-093b1880135a-config-data\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.296671 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecfa61e1-38ee-4cc5-80ac-093b1880135a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.296696 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.296717 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecfa61e1-38ee-4cc5-80ac-093b1880135a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.296784 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr9z5\" (UniqueName: \"kubernetes.io/projected/ecfa61e1-38ee-4cc5-80ac-093b1880135a-kube-api-access-lr9z5\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.296834 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecfa61e1-38ee-4cc5-80ac-093b1880135a-logs\") pod 
\"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.296868 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecfa61e1-38ee-4cc5-80ac-093b1880135a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.296908 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecfa61e1-38ee-4cc5-80ac-093b1880135a-scripts\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.296956 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.297234 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecfa61e1-38ee-4cc5-80ac-093b1880135a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.297610 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecfa61e1-38ee-4cc5-80ac-093b1880135a-logs\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " 
pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.302376 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecfa61e1-38ee-4cc5-80ac-093b1880135a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.302376 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecfa61e1-38ee-4cc5-80ac-093b1880135a-scripts\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.303380 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecfa61e1-38ee-4cc5-80ac-093b1880135a-config-data\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.305199 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecfa61e1-38ee-4cc5-80ac-093b1880135a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.317511 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr9z5\" (UniqueName: \"kubernetes.io/projected/ecfa61e1-38ee-4cc5-80ac-093b1880135a-kube-api-access-lr9z5\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: 
I1129 07:26:22.345841 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"ecfa61e1-38ee-4cc5-80ac-093b1880135a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.398672 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.705078 4828 generic.go:334] "Generic (PLEG): container finished" podID="33043721-20af-4165-8035-2a4fbe295eb3" containerID="502d5ee4c39b3cefe8b609992d057b19b7ab830f3c89318e6332746c3f275db8" exitCode=0 Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.705189 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wdknn" event={"ID":"33043721-20af-4165-8035-2a4fbe295eb3","Type":"ContainerDied","Data":"502d5ee4c39b3cefe8b609992d057b19b7ab830f3c89318e6332746c3f275db8"} Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.708227 4828 generic.go:334] "Generic (PLEG): container finished" podID="319d7dd8-e096-41a6-8394-fed7f944e1ae" containerID="b7b44572a0f02a5f5f6641ea4e39ebe00423ae62a08bf8e3342f933c94616f77" exitCode=0 Nov 29 07:26:22 crc kubenswrapper[4828]: I1129 07:26:22.708275 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"319d7dd8-e096-41a6-8394-fed7f944e1ae","Type":"ContainerDied","Data":"b7b44572a0f02a5f5f6641ea4e39ebe00423ae62a08bf8e3342f933c94616f77"} Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.080031 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:26:23 crc kubenswrapper[4828]: W1129 07:26:23.084439 4828 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecfa61e1_38ee_4cc5_80ac_093b1880135a.slice/crio-8903879c1dc2c05a656f78f8def7c33b5d95c2dd3035bc8ea84545a30c9173f1 WatchSource:0}: Error finding container 8903879c1dc2c05a656f78f8def7c33b5d95c2dd3035bc8ea84545a30c9173f1: Status 404 returned error can't find the container with id 8903879c1dc2c05a656f78f8def7c33b5d95c2dd3035bc8ea84545a30c9173f1 Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.210379 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.316258 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-internal-tls-certs\") pod \"319d7dd8-e096-41a6-8394-fed7f944e1ae\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.316384 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-scripts\") pod \"319d7dd8-e096-41a6-8394-fed7f944e1ae\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.316407 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"319d7dd8-e096-41a6-8394-fed7f944e1ae\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.316448 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-combined-ca-bundle\") pod \"319d7dd8-e096-41a6-8394-fed7f944e1ae\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " Nov 29 07:26:23 crc 
kubenswrapper[4828]: I1129 07:26:23.316518 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/319d7dd8-e096-41a6-8394-fed7f944e1ae-httpd-run\") pod \"319d7dd8-e096-41a6-8394-fed7f944e1ae\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.316534 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-config-data\") pod \"319d7dd8-e096-41a6-8394-fed7f944e1ae\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.316586 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/319d7dd8-e096-41a6-8394-fed7f944e1ae-logs\") pod \"319d7dd8-e096-41a6-8394-fed7f944e1ae\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.316678 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhg7p\" (UniqueName: \"kubernetes.io/projected/319d7dd8-e096-41a6-8394-fed7f944e1ae-kube-api-access-mhg7p\") pod \"319d7dd8-e096-41a6-8394-fed7f944e1ae\" (UID: \"319d7dd8-e096-41a6-8394-fed7f944e1ae\") " Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.318600 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/319d7dd8-e096-41a6-8394-fed7f944e1ae-logs" (OuterVolumeSpecName: "logs") pod "319d7dd8-e096-41a6-8394-fed7f944e1ae" (UID: "319d7dd8-e096-41a6-8394-fed7f944e1ae"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.318870 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/319d7dd8-e096-41a6-8394-fed7f944e1ae-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "319d7dd8-e096-41a6-8394-fed7f944e1ae" (UID: "319d7dd8-e096-41a6-8394-fed7f944e1ae"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.323850 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-scripts" (OuterVolumeSpecName: "scripts") pod "319d7dd8-e096-41a6-8394-fed7f944e1ae" (UID: "319d7dd8-e096-41a6-8394-fed7f944e1ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.323972 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/319d7dd8-e096-41a6-8394-fed7f944e1ae-kube-api-access-mhg7p" (OuterVolumeSpecName: "kube-api-access-mhg7p") pod "319d7dd8-e096-41a6-8394-fed7f944e1ae" (UID: "319d7dd8-e096-41a6-8394-fed7f944e1ae"). InnerVolumeSpecName "kube-api-access-mhg7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.325016 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "319d7dd8-e096-41a6-8394-fed7f944e1ae" (UID: "319d7dd8-e096-41a6-8394-fed7f944e1ae"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.352534 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "319d7dd8-e096-41a6-8394-fed7f944e1ae" (UID: "319d7dd8-e096-41a6-8394-fed7f944e1ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.386961 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-config-data" (OuterVolumeSpecName: "config-data") pod "319d7dd8-e096-41a6-8394-fed7f944e1ae" (UID: "319d7dd8-e096-41a6-8394-fed7f944e1ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.404601 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "319d7dd8-e096-41a6-8394-fed7f944e1ae" (UID: "319d7dd8-e096-41a6-8394-fed7f944e1ae"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.419148 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.419190 4828 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/319d7dd8-e096-41a6-8394-fed7f944e1ae-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.419199 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.419208 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/319d7dd8-e096-41a6-8394-fed7f944e1ae-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.419217 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhg7p\" (UniqueName: \"kubernetes.io/projected/319d7dd8-e096-41a6-8394-fed7f944e1ae-kube-api-access-mhg7p\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.419230 4828 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.419238 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/319d7dd8-e096-41a6-8394-fed7f944e1ae-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.419278 4828 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.425408 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17156cfb-ec83-47db-955b-44f5045179e8" path="/var/lib/kubelet/pods/17156cfb-ec83-47db-955b-44f5045179e8/volumes" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.440214 4828 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.520992 4828 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.722252 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"319d7dd8-e096-41a6-8394-fed7f944e1ae","Type":"ContainerDied","Data":"919ead4c35cf27d001138199848080fb757693e4076b5f1a2deded54e5139bfe"} Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.722316 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.722328 4828 scope.go:117] "RemoveContainer" containerID="b7b44572a0f02a5f5f6641ea4e39ebe00423ae62a08bf8e3342f933c94616f77" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.724902 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecfa61e1-38ee-4cc5-80ac-093b1880135a","Type":"ContainerStarted","Data":"8903879c1dc2c05a656f78f8def7c33b5d95c2dd3035bc8ea84545a30c9173f1"} Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.752434 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.766377 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.775673 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:26:23 crc kubenswrapper[4828]: E1129 07:26:23.776142 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="319d7dd8-e096-41a6-8394-fed7f944e1ae" containerName="glance-log" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.776162 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="319d7dd8-e096-41a6-8394-fed7f944e1ae" containerName="glance-log" Nov 29 07:26:23 crc kubenswrapper[4828]: E1129 07:26:23.776173 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="319d7dd8-e096-41a6-8394-fed7f944e1ae" containerName="glance-httpd" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.776179 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="319d7dd8-e096-41a6-8394-fed7f944e1ae" containerName="glance-httpd" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.776383 4828 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="319d7dd8-e096-41a6-8394-fed7f944e1ae" containerName="glance-httpd" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.776406 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="319d7dd8-e096-41a6-8394-fed7f944e1ae" containerName="glance-log" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.780643 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.783711 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.783923 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.791709 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.888790 4828 scope.go:117] "RemoveContainer" containerID="c62ae5c9596d3334c6233ec70ed886ac71d9d21882603ace7ef5e193a9ec13b5" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.928732 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.928805 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04bda42c-062d-483d-872e-bd260cf2b4b4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:23 crc 
kubenswrapper[4828]: I1129 07:26:23.928841 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04bda42c-062d-483d-872e-bd260cf2b4b4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.928914 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04bda42c-062d-483d-872e-bd260cf2b4b4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.929002 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwjbx\" (UniqueName: \"kubernetes.io/projected/04bda42c-062d-483d-872e-bd260cf2b4b4-kube-api-access-xwjbx\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.929069 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04bda42c-062d-483d-872e-bd260cf2b4b4-logs\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.929121 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04bda42c-062d-483d-872e-bd260cf2b4b4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 
07:26:23 crc kubenswrapper[4828]: I1129 07:26:23.929149 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bda42c-062d-483d-872e-bd260cf2b4b4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.032448 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04bda42c-062d-483d-872e-bd260cf2b4b4-logs\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.032513 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04bda42c-062d-483d-872e-bd260cf2b4b4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.032544 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bda42c-062d-483d-872e-bd260cf2b4b4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.032603 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.032625 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04bda42c-062d-483d-872e-bd260cf2b4b4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.032644 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04bda42c-062d-483d-872e-bd260cf2b4b4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.032694 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04bda42c-062d-483d-872e-bd260cf2b4b4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.032767 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwjbx\" (UniqueName: \"kubernetes.io/projected/04bda42c-062d-483d-872e-bd260cf2b4b4-kube-api-access-xwjbx\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.033012 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04bda42c-062d-483d-872e-bd260cf2b4b4-logs\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.033307 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04bda42c-062d-483d-872e-bd260cf2b4b4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.033336 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.038534 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04bda42c-062d-483d-872e-bd260cf2b4b4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.039728 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bda42c-062d-483d-872e-bd260cf2b4b4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.040956 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04bda42c-062d-483d-872e-bd260cf2b4b4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.045820 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/04bda42c-062d-483d-872e-bd260cf2b4b4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.053664 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwjbx\" (UniqueName: \"kubernetes.io/projected/04bda42c-062d-483d-872e-bd260cf2b4b4-kube-api-access-xwjbx\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.062908 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"04bda42c-062d-483d-872e-bd260cf2b4b4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.072628 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-84b768d757-5f2b9" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.102051 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.181410 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wdknn" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.199231 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-578795589b-kkwlj"] Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.201478 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-578795589b-kkwlj" podUID="4d02fcd3-69b7-410c-8027-e36cbd5ae830" containerName="neutron-api" containerID="cri-o://48b1fe4b4404d06a0483ced4af3d0579a95c6349b8150262a655921a3cd362b2" gracePeriod=30 Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.201685 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-578795589b-kkwlj" podUID="4d02fcd3-69b7-410c-8027-e36cbd5ae830" containerName="neutron-httpd" containerID="cri-o://1d1ab7c820a643ed12e14c32cd13c8701a07bd69cc44c2c1bcae8f4fea4343f0" gracePeriod=30 Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.240058 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-scripts\") pod \"33043721-20af-4165-8035-2a4fbe295eb3\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.240345 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-combined-ca-bundle\") pod \"33043721-20af-4165-8035-2a4fbe295eb3\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.240399 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-config-data\") pod \"33043721-20af-4165-8035-2a4fbe295eb3\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " Nov 29 
07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.240473 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4jjh\" (UniqueName: \"kubernetes.io/projected/33043721-20af-4165-8035-2a4fbe295eb3-kube-api-access-g4jjh\") pod \"33043721-20af-4165-8035-2a4fbe295eb3\" (UID: \"33043721-20af-4165-8035-2a4fbe295eb3\") " Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.247323 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33043721-20af-4165-8035-2a4fbe295eb3-kube-api-access-g4jjh" (OuterVolumeSpecName: "kube-api-access-g4jjh") pod "33043721-20af-4165-8035-2a4fbe295eb3" (UID: "33043721-20af-4165-8035-2a4fbe295eb3"). InnerVolumeSpecName "kube-api-access-g4jjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.248814 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-scripts" (OuterVolumeSpecName: "scripts") pod "33043721-20af-4165-8035-2a4fbe295eb3" (UID: "33043721-20af-4165-8035-2a4fbe295eb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.276086 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-config-data" (OuterVolumeSpecName: "config-data") pod "33043721-20af-4165-8035-2a4fbe295eb3" (UID: "33043721-20af-4165-8035-2a4fbe295eb3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.313096 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33043721-20af-4165-8035-2a4fbe295eb3" (UID: "33043721-20af-4165-8035-2a4fbe295eb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.343046 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.343082 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.343096 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4jjh\" (UniqueName: \"kubernetes.io/projected/33043721-20af-4165-8035-2a4fbe295eb3-kube-api-access-g4jjh\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.343109 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33043721-20af-4165-8035-2a4fbe295eb3-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.673980 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.755907 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-sg-core-conf-yaml\") pod \"59be363e-f320-4a44-9482-e25c4a3a6fb8\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.755969 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmgvf\" (UniqueName: \"kubernetes.io/projected/59be363e-f320-4a44-9482-e25c4a3a6fb8-kube-api-access-lmgvf\") pod \"59be363e-f320-4a44-9482-e25c4a3a6fb8\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.756009 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-config-data\") pod \"59be363e-f320-4a44-9482-e25c4a3a6fb8\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.756134 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59be363e-f320-4a44-9482-e25c4a3a6fb8-run-httpd\") pod \"59be363e-f320-4a44-9482-e25c4a3a6fb8\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.756220 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59be363e-f320-4a44-9482-e25c4a3a6fb8-log-httpd\") pod \"59be363e-f320-4a44-9482-e25c4a3a6fb8\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.756228 4828 generic.go:334] "Generic (PLEG): container finished" podID="59be363e-f320-4a44-9482-e25c4a3a6fb8" 
containerID="b83c55cfcc09b68509b5ef760412e0d09a75c65ecdc75237ad3b3dc56b5ca072" exitCode=0 Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.756259 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-combined-ca-bundle\") pod \"59be363e-f320-4a44-9482-e25c4a3a6fb8\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.756295 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-scripts\") pod \"59be363e-f320-4a44-9482-e25c4a3a6fb8\" (UID: \"59be363e-f320-4a44-9482-e25c4a3a6fb8\") " Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.756449 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.757359 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59be363e-f320-4a44-9482-e25c4a3a6fb8","Type":"ContainerDied","Data":"b83c55cfcc09b68509b5ef760412e0d09a75c65ecdc75237ad3b3dc56b5ca072"} Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.757400 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59be363e-f320-4a44-9482-e25c4a3a6fb8","Type":"ContainerDied","Data":"d0da35bb1b17993ece60fcafce6484904786237dcda2f306377e16894cb9c679"} Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.757423 4828 scope.go:117] "RemoveContainer" containerID="cc4d74cff9e130dc97bcf3dc4000e73944edc68dff0c590eba0e1adcfe67f997" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.757533 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59be363e-f320-4a44-9482-e25c4a3a6fb8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"59be363e-f320-4a44-9482-e25c4a3a6fb8" (UID: "59be363e-f320-4a44-9482-e25c4a3a6fb8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.757786 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59be363e-f320-4a44-9482-e25c4a3a6fb8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "59be363e-f320-4a44-9482-e25c4a3a6fb8" (UID: "59be363e-f320-4a44-9482-e25c4a3a6fb8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.762754 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecfa61e1-38ee-4cc5-80ac-093b1880135a","Type":"ContainerStarted","Data":"57b40d29fc2d85581d92a75cff7cc5582c778de732bf24e975bf323dc10bd251"} Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.770807 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-scripts" (OuterVolumeSpecName: "scripts") pod "59be363e-f320-4a44-9482-e25c4a3a6fb8" (UID: "59be363e-f320-4a44-9482-e25c4a3a6fb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.771305 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59be363e-f320-4a44-9482-e25c4a3a6fb8-kube-api-access-lmgvf" (OuterVolumeSpecName: "kube-api-access-lmgvf") pod "59be363e-f320-4a44-9482-e25c4a3a6fb8" (UID: "59be363e-f320-4a44-9482-e25c4a3a6fb8"). InnerVolumeSpecName "kube-api-access-lmgvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.775160 4828 generic.go:334] "Generic (PLEG): container finished" podID="4d02fcd3-69b7-410c-8027-e36cbd5ae830" containerID="1d1ab7c820a643ed12e14c32cd13c8701a07bd69cc44c2c1bcae8f4fea4343f0" exitCode=0 Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.775313 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-578795589b-kkwlj" event={"ID":"4d02fcd3-69b7-410c-8027-e36cbd5ae830","Type":"ContainerDied","Data":"1d1ab7c820a643ed12e14c32cd13c8701a07bd69cc44c2c1bcae8f4fea4343f0"} Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.777146 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wdknn" event={"ID":"33043721-20af-4165-8035-2a4fbe295eb3","Type":"ContainerDied","Data":"4e3473174c6b144290a3bcc83ff71d938460898b78dc8d303b8edf657db87e81"} Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.777179 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e3473174c6b144290a3bcc83ff71d938460898b78dc8d303b8edf657db87e81" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.777231 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wdknn" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.792528 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.798089 4828 scope.go:117] "RemoveContainer" containerID="14d0b096dd4b10ee8e5917672e8d23a05e8f64c68e91a6ec657fc93e23977453" Nov 29 07:26:24 crc kubenswrapper[4828]: W1129 07:26:24.804437 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04bda42c_062d_483d_872e_bd260cf2b4b4.slice/crio-4afb0753fef32e11c4f7b36aed5e6162f30af120b5aa271bd9d45917877db7dc WatchSource:0}: Error finding container 4afb0753fef32e11c4f7b36aed5e6162f30af120b5aa271bd9d45917877db7dc: Status 404 returned error can't find the container with id 4afb0753fef32e11c4f7b36aed5e6162f30af120b5aa271bd9d45917877db7dc Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.806475 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "59be363e-f320-4a44-9482-e25c4a3a6fb8" (UID: "59be363e-f320-4a44-9482-e25c4a3a6fb8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.839686 4828 scope.go:117] "RemoveContainer" containerID="796a500b84f78382fa5e3902766e5375998cfc36da03e047b26c02a0df888109" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.851166 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:26:24 crc kubenswrapper[4828]: E1129 07:26:24.851742 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="ceilometer-central-agent" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.851763 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="ceilometer-central-agent" Nov 29 07:26:24 crc kubenswrapper[4828]: E1129 07:26:24.851780 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="sg-core" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.851788 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="sg-core" Nov 29 07:26:24 crc kubenswrapper[4828]: E1129 07:26:24.851813 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="ceilometer-notification-agent" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.851821 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="ceilometer-notification-agent" Nov 29 07:26:24 crc kubenswrapper[4828]: E1129 07:26:24.851837 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33043721-20af-4165-8035-2a4fbe295eb3" containerName="nova-cell0-conductor-db-sync" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.851845 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="33043721-20af-4165-8035-2a4fbe295eb3" 
containerName="nova-cell0-conductor-db-sync" Nov 29 07:26:24 crc kubenswrapper[4828]: E1129 07:26:24.851863 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="proxy-httpd" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.851869 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="proxy-httpd" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.852127 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="ceilometer-central-agent" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.852147 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="ceilometer-notification-agent" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.852163 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="proxy-httpd" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.852223 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" containerName="sg-core" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.852242 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="33043721-20af-4165-8035-2a4fbe295eb3" containerName="nova-cell0-conductor-db-sync" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.853249 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.856161 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.856416 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wfdkq" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.858068 4828 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.858104 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmgvf\" (UniqueName: \"kubernetes.io/projected/59be363e-f320-4a44-9482-e25c4a3a6fb8-kube-api-access-lmgvf\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.858116 4828 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59be363e-f320-4a44-9482-e25c4a3a6fb8-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.858127 4828 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59be363e-f320-4a44-9482-e25c4a3a6fb8-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.858139 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.876464 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"59be363e-f320-4a44-9482-e25c4a3a6fb8" (UID: "59be363e-f320-4a44-9482-e25c4a3a6fb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.886294 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.898002 4828 scope.go:117] "RemoveContainer" containerID="b83c55cfcc09b68509b5ef760412e0d09a75c65ecdc75237ad3b3dc56b5ca072" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.918487 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-config-data" (OuterVolumeSpecName: "config-data") pod "59be363e-f320-4a44-9482-e25c4a3a6fb8" (UID: "59be363e-f320-4a44-9482-e25c4a3a6fb8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.930846 4828 scope.go:117] "RemoveContainer" containerID="cc4d74cff9e130dc97bcf3dc4000e73944edc68dff0c590eba0e1adcfe67f997" Nov 29 07:26:24 crc kubenswrapper[4828]: E1129 07:26:24.931655 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc4d74cff9e130dc97bcf3dc4000e73944edc68dff0c590eba0e1adcfe67f997\": container with ID starting with cc4d74cff9e130dc97bcf3dc4000e73944edc68dff0c590eba0e1adcfe67f997 not found: ID does not exist" containerID="cc4d74cff9e130dc97bcf3dc4000e73944edc68dff0c590eba0e1adcfe67f997" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.931693 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc4d74cff9e130dc97bcf3dc4000e73944edc68dff0c590eba0e1adcfe67f997"} err="failed to get container status \"cc4d74cff9e130dc97bcf3dc4000e73944edc68dff0c590eba0e1adcfe67f997\": rpc error: code = NotFound desc = could not find container 
\"cc4d74cff9e130dc97bcf3dc4000e73944edc68dff0c590eba0e1adcfe67f997\": container with ID starting with cc4d74cff9e130dc97bcf3dc4000e73944edc68dff0c590eba0e1adcfe67f997 not found: ID does not exist" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.931714 4828 scope.go:117] "RemoveContainer" containerID="14d0b096dd4b10ee8e5917672e8d23a05e8f64c68e91a6ec657fc93e23977453" Nov 29 07:26:24 crc kubenswrapper[4828]: E1129 07:26:24.932080 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14d0b096dd4b10ee8e5917672e8d23a05e8f64c68e91a6ec657fc93e23977453\": container with ID starting with 14d0b096dd4b10ee8e5917672e8d23a05e8f64c68e91a6ec657fc93e23977453 not found: ID does not exist" containerID="14d0b096dd4b10ee8e5917672e8d23a05e8f64c68e91a6ec657fc93e23977453" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.932102 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14d0b096dd4b10ee8e5917672e8d23a05e8f64c68e91a6ec657fc93e23977453"} err="failed to get container status \"14d0b096dd4b10ee8e5917672e8d23a05e8f64c68e91a6ec657fc93e23977453\": rpc error: code = NotFound desc = could not find container \"14d0b096dd4b10ee8e5917672e8d23a05e8f64c68e91a6ec657fc93e23977453\": container with ID starting with 14d0b096dd4b10ee8e5917672e8d23a05e8f64c68e91a6ec657fc93e23977453 not found: ID does not exist" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.932115 4828 scope.go:117] "RemoveContainer" containerID="796a500b84f78382fa5e3902766e5375998cfc36da03e047b26c02a0df888109" Nov 29 07:26:24 crc kubenswrapper[4828]: E1129 07:26:24.932481 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"796a500b84f78382fa5e3902766e5375998cfc36da03e047b26c02a0df888109\": container with ID starting with 796a500b84f78382fa5e3902766e5375998cfc36da03e047b26c02a0df888109 not found: ID does not exist" 
containerID="796a500b84f78382fa5e3902766e5375998cfc36da03e047b26c02a0df888109" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.932503 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"796a500b84f78382fa5e3902766e5375998cfc36da03e047b26c02a0df888109"} err="failed to get container status \"796a500b84f78382fa5e3902766e5375998cfc36da03e047b26c02a0df888109\": rpc error: code = NotFound desc = could not find container \"796a500b84f78382fa5e3902766e5375998cfc36da03e047b26c02a0df888109\": container with ID starting with 796a500b84f78382fa5e3902766e5375998cfc36da03e047b26c02a0df888109 not found: ID does not exist" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.932518 4828 scope.go:117] "RemoveContainer" containerID="b83c55cfcc09b68509b5ef760412e0d09a75c65ecdc75237ad3b3dc56b5ca072" Nov 29 07:26:24 crc kubenswrapper[4828]: E1129 07:26:24.932797 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b83c55cfcc09b68509b5ef760412e0d09a75c65ecdc75237ad3b3dc56b5ca072\": container with ID starting with b83c55cfcc09b68509b5ef760412e0d09a75c65ecdc75237ad3b3dc56b5ca072 not found: ID does not exist" containerID="b83c55cfcc09b68509b5ef760412e0d09a75c65ecdc75237ad3b3dc56b5ca072" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.932847 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b83c55cfcc09b68509b5ef760412e0d09a75c65ecdc75237ad3b3dc56b5ca072"} err="failed to get container status \"b83c55cfcc09b68509b5ef760412e0d09a75c65ecdc75237ad3b3dc56b5ca072\": rpc error: code = NotFound desc = could not find container \"b83c55cfcc09b68509b5ef760412e0d09a75c65ecdc75237ad3b3dc56b5ca072\": container with ID starting with b83c55cfcc09b68509b5ef760412e0d09a75c65ecdc75237ad3b3dc56b5ca072 not found: ID does not exist" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.960141 4828 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq2v9\" (UniqueName: \"kubernetes.io/projected/3869e659-d33a-41bf-a89b-5cb222280fac-kube-api-access-xq2v9\") pod \"nova-cell0-conductor-0\" (UID: \"3869e659-d33a-41bf-a89b-5cb222280fac\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.960286 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3869e659-d33a-41bf-a89b-5cb222280fac-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3869e659-d33a-41bf-a89b-5cb222280fac\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.960335 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3869e659-d33a-41bf-a89b-5cb222280fac-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3869e659-d33a-41bf-a89b-5cb222280fac\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.960416 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:24 crc kubenswrapper[4828]: I1129 07:26:24.960430 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59be363e-f320-4a44-9482-e25c4a3a6fb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.061775 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3869e659-d33a-41bf-a89b-5cb222280fac-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: 
\"3869e659-d33a-41bf-a89b-5cb222280fac\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.062782 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq2v9\" (UniqueName: \"kubernetes.io/projected/3869e659-d33a-41bf-a89b-5cb222280fac-kube-api-access-xq2v9\") pod \"nova-cell0-conductor-0\" (UID: \"3869e659-d33a-41bf-a89b-5cb222280fac\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.062994 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3869e659-d33a-41bf-a89b-5cb222280fac-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3869e659-d33a-41bf-a89b-5cb222280fac\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.065929 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3869e659-d33a-41bf-a89b-5cb222280fac-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3869e659-d33a-41bf-a89b-5cb222280fac\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.067467 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3869e659-d33a-41bf-a89b-5cb222280fac-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3869e659-d33a-41bf-a89b-5cb222280fac\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.087200 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq2v9\" (UniqueName: \"kubernetes.io/projected/3869e659-d33a-41bf-a89b-5cb222280fac-kube-api-access-xq2v9\") pod \"nova-cell0-conductor-0\" (UID: \"3869e659-d33a-41bf-a89b-5cb222280fac\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:25 crc kubenswrapper[4828]: 
I1129 07:26:25.185714 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.354981 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.370526 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.403640 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.415798 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.421288 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.421647 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.449365 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="319d7dd8-e096-41a6-8394-fed7f944e1ae" path="/var/lib/kubelet/pods/319d7dd8-e096-41a6-8394-fed7f944e1ae/volumes" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.454024 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59be363e-f320-4a44-9482-e25c4a3a6fb8" path="/var/lib/kubelet/pods/59be363e-f320-4a44-9482-e25c4a3a6fb8/volumes" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.455110 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.577936 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-config-data\") pod 
\"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.578013 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.578070 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-scripts\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.578100 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4lfm\" (UniqueName: \"kubernetes.io/projected/09187fa9-5870-40f9-95eb-9397eeb0e400-kube-api-access-v4lfm\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.578124 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09187fa9-5870-40f9-95eb-9397eeb0e400-run-httpd\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.578138 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc 
kubenswrapper[4828]: I1129 07:26:25.578223 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09187fa9-5870-40f9-95eb-9397eeb0e400-log-httpd\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.683425 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09187fa9-5870-40f9-95eb-9397eeb0e400-log-httpd\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.683542 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-config-data\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.683610 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.683660 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-scripts\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.683715 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4lfm\" (UniqueName: 
\"kubernetes.io/projected/09187fa9-5870-40f9-95eb-9397eeb0e400-kube-api-access-v4lfm\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.683765 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09187fa9-5870-40f9-95eb-9397eeb0e400-run-httpd\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.684222 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.684365 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09187fa9-5870-40f9-95eb-9397eeb0e400-run-httpd\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.684531 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09187fa9-5870-40f9-95eb-9397eeb0e400-log-httpd\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.688310 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.689419 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-config-data\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.689849 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-scripts\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.691128 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.702168 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4lfm\" (UniqueName: \"kubernetes.io/projected/09187fa9-5870-40f9-95eb-9397eeb0e400-kube-api-access-v4lfm\") pod \"ceilometer-0\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.743929 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.766080 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.788174 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"04bda42c-062d-483d-872e-bd260cf2b4b4","Type":"ContainerStarted","Data":"d21a950fb0e323eb1868242abd991add6bd8f5876898497dacef1ce972f0fd17"} Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.788630 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"04bda42c-062d-483d-872e-bd260cf2b4b4","Type":"ContainerStarted","Data":"4afb0753fef32e11c4f7b36aed5e6162f30af120b5aa271bd9d45917877db7dc"} Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.797603 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecfa61e1-38ee-4cc5-80ac-093b1880135a","Type":"ContainerStarted","Data":"c929670088ee44dd472943a51b1bf9e9a4fa63c9be607438fb556784dcbb953e"} Nov 29 07:26:25 crc kubenswrapper[4828]: W1129 07:26:25.813114 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3869e659_d33a_41bf_a89b_5cb222280fac.slice/crio-cdd8c3a3573eed2aa991f22d6f0dbc51f17eb56b5cfd2fa4f789b6b0ebf5ec40 WatchSource:0}: Error finding container cdd8c3a3573eed2aa991f22d6f0dbc51f17eb56b5cfd2fa4f789b6b0ebf5ec40: Status 404 returned error can't find the container with id cdd8c3a3573eed2aa991f22d6f0dbc51f17eb56b5cfd2fa4f789b6b0ebf5ec40 Nov 29 07:26:25 crc kubenswrapper[4828]: I1129 07:26:25.825898 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.825875443 podStartE2EDuration="3.825875443s" podCreationTimestamp="2025-11-29 07:26:22 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:25.81873377 +0000 UTC m=+1525.440809828" watchObservedRunningTime="2025-11-29 07:26:25.825875443 +0000 UTC m=+1525.447951501" Nov 29 07:26:26 crc kubenswrapper[4828]: W1129 07:26:26.299229 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09187fa9_5870_40f9_95eb_9397eeb0e400.slice/crio-c4d00654a896258f8230c2ad6f0478d57580a1f2b9536f513ee9c59b996159ed WatchSource:0}: Error finding container c4d00654a896258f8230c2ad6f0478d57580a1f2b9536f513ee9c59b996159ed: Status 404 returned error can't find the container with id c4d00654a896258f8230c2ad6f0478d57580a1f2b9536f513ee9c59b996159ed Nov 29 07:26:26 crc kubenswrapper[4828]: I1129 07:26:26.315120 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:26 crc kubenswrapper[4828]: I1129 07:26:26.565230 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:26 crc kubenswrapper[4828]: I1129 07:26:26.808925 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09187fa9-5870-40f9-95eb-9397eeb0e400","Type":"ContainerStarted","Data":"c4d00654a896258f8230c2ad6f0478d57580a1f2b9536f513ee9c59b996159ed"} Nov 29 07:26:26 crc kubenswrapper[4828]: I1129 07:26:26.814522 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"04bda42c-062d-483d-872e-bd260cf2b4b4","Type":"ContainerStarted","Data":"be661aa9053fea5320f2951e69d3e0f7bb6fe6394db9c12aeaed6ca50677cc09"} Nov 29 07:26:26 crc kubenswrapper[4828]: I1129 07:26:26.819600 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"3869e659-d33a-41bf-a89b-5cb222280fac","Type":"ContainerStarted","Data":"17b66ff702cf6860a6731e034cf1c9c17167e9540d80960eaef59a0430ebc39c"} Nov 29 07:26:26 crc kubenswrapper[4828]: I1129 07:26:26.819644 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:26 crc kubenswrapper[4828]: I1129 07:26:26.819657 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3869e659-d33a-41bf-a89b-5cb222280fac","Type":"ContainerStarted","Data":"cdd8c3a3573eed2aa991f22d6f0dbc51f17eb56b5cfd2fa4f789b6b0ebf5ec40"} Nov 29 07:26:26 crc kubenswrapper[4828]: I1129 07:26:26.847860 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.847835081 podStartE2EDuration="3.847835081s" podCreationTimestamp="2025-11-29 07:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:26.835598726 +0000 UTC m=+1526.457674804" watchObservedRunningTime="2025-11-29 07:26:26.847835081 +0000 UTC m=+1526.469911129" Nov 29 07:26:26 crc kubenswrapper[4828]: I1129 07:26:26.865610 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.865588737 podStartE2EDuration="2.865588737s" podCreationTimestamp="2025-11-29 07:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:26.85793698 +0000 UTC m=+1526.480013038" watchObservedRunningTime="2025-11-29 07:26:26.865588737 +0000 UTC m=+1526.487664795" Nov 29 07:26:27 crc kubenswrapper[4828]: I1129 07:26:27.830290 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"09187fa9-5870-40f9-95eb-9397eeb0e400","Type":"ContainerStarted","Data":"e07c174b14b1062b6f6dd587db2e6a22a2bb5b235b78307ad9818ac23899f09a"} Nov 29 07:26:28 crc kubenswrapper[4828]: I1129 07:26:28.869597 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09187fa9-5870-40f9-95eb-9397eeb0e400","Type":"ContainerStarted","Data":"f50f79da03102d76c4671b9cbbebe7079a461071d29a8afb68adf82997bc33b8"} Nov 29 07:26:28 crc kubenswrapper[4828]: I1129 07:26:28.870001 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09187fa9-5870-40f9-95eb-9397eeb0e400","Type":"ContainerStarted","Data":"fb6306b303188316bfc9dc7c4126fa2e4d479750bcf3769f33fb57dc00d01f58"} Nov 29 07:26:29 crc kubenswrapper[4828]: I1129 07:26:29.880713 4828 generic.go:334] "Generic (PLEG): container finished" podID="4d02fcd3-69b7-410c-8027-e36cbd5ae830" containerID="48b1fe4b4404d06a0483ced4af3d0579a95c6349b8150262a655921a3cd362b2" exitCode=0 Nov 29 07:26:29 crc kubenswrapper[4828]: I1129 07:26:29.880771 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-578795589b-kkwlj" event={"ID":"4d02fcd3-69b7-410c-8027-e36cbd5ae830","Type":"ContainerDied","Data":"48b1fe4b4404d06a0483ced4af3d0579a95c6349b8150262a655921a3cd362b2"} Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.367944 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.490532 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-ovndb-tls-certs\") pod \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.490910 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmzn9\" (UniqueName: \"kubernetes.io/projected/4d02fcd3-69b7-410c-8027-e36cbd5ae830-kube-api-access-jmzn9\") pod \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.491091 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-config\") pod \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.491206 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-combined-ca-bundle\") pod \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.491335 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-httpd-config\") pod \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\" (UID: \"4d02fcd3-69b7-410c-8027-e36cbd5ae830\") " Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.497425 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4d02fcd3-69b7-410c-8027-e36cbd5ae830" (UID: "4d02fcd3-69b7-410c-8027-e36cbd5ae830"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.497432 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d02fcd3-69b7-410c-8027-e36cbd5ae830-kube-api-access-jmzn9" (OuterVolumeSpecName: "kube-api-access-jmzn9") pod "4d02fcd3-69b7-410c-8027-e36cbd5ae830" (UID: "4d02fcd3-69b7-410c-8027-e36cbd5ae830"). InnerVolumeSpecName "kube-api-access-jmzn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.541893 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-config" (OuterVolumeSpecName: "config") pod "4d02fcd3-69b7-410c-8027-e36cbd5ae830" (UID: "4d02fcd3-69b7-410c-8027-e36cbd5ae830"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.545300 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d02fcd3-69b7-410c-8027-e36cbd5ae830" (UID: "4d02fcd3-69b7-410c-8027-e36cbd5ae830"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.578586 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4d02fcd3-69b7-410c-8027-e36cbd5ae830" (UID: "4d02fcd3-69b7-410c-8027-e36cbd5ae830"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.595489 4828 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.595530 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmzn9\" (UniqueName: \"kubernetes.io/projected/4d02fcd3-69b7-410c-8027-e36cbd5ae830-kube-api-access-jmzn9\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.595542 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.595551 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.595560 4828 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4d02fcd3-69b7-410c-8027-e36cbd5ae830-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.892564 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09187fa9-5870-40f9-95eb-9397eeb0e400","Type":"ContainerStarted","Data":"0e5dd812a3b7d460294989aaed13467410d4ae9445101f5703921771c03d0752"} Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.892659 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="ceilometer-central-agent" 
containerID="cri-o://e07c174b14b1062b6f6dd587db2e6a22a2bb5b235b78307ad9818ac23899f09a" gracePeriod=30 Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.892975 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.893154 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="proxy-httpd" containerID="cri-o://0e5dd812a3b7d460294989aaed13467410d4ae9445101f5703921771c03d0752" gracePeriod=30 Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.893234 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="ceilometer-notification-agent" containerID="cri-o://fb6306b303188316bfc9dc7c4126fa2e4d479750bcf3769f33fb57dc00d01f58" gracePeriod=30 Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.893291 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="sg-core" containerID="cri-o://f50f79da03102d76c4671b9cbbebe7079a461071d29a8afb68adf82997bc33b8" gracePeriod=30 Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.898123 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-578795589b-kkwlj" event={"ID":"4d02fcd3-69b7-410c-8027-e36cbd5ae830","Type":"ContainerDied","Data":"33b182b3b847a05a3ac52d55744e7a769412d04bbf3c5b0b4efdd0311777718e"} Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.898176 4828 scope.go:117] "RemoveContainer" containerID="1d1ab7c820a643ed12e14c32cd13c8701a07bd69cc44c2c1bcae8f4fea4343f0" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.898179 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-578795589b-kkwlj" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.927913 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.390042411 podStartE2EDuration="5.927888127s" podCreationTimestamp="2025-11-29 07:26:25 +0000 UTC" firstStartedPulling="2025-11-29 07:26:26.303092605 +0000 UTC m=+1525.925168653" lastFinishedPulling="2025-11-29 07:26:29.840938301 +0000 UTC m=+1529.463014369" observedRunningTime="2025-11-29 07:26:30.917155882 +0000 UTC m=+1530.539231940" watchObservedRunningTime="2025-11-29 07:26:30.927888127 +0000 UTC m=+1530.549964185" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.940636 4828 scope.go:117] "RemoveContainer" containerID="48b1fe4b4404d06a0483ced4af3d0579a95c6349b8150262a655921a3cd362b2" Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.960669 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-578795589b-kkwlj"] Nov 29 07:26:30 crc kubenswrapper[4828]: I1129 07:26:30.970374 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-578795589b-kkwlj"] Nov 29 07:26:31 crc kubenswrapper[4828]: I1129 07:26:31.503044 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d02fcd3-69b7-410c-8027-e36cbd5ae830" path="/var/lib/kubelet/pods/4d02fcd3-69b7-410c-8027-e36cbd5ae830/volumes" Nov 29 07:26:31 crc kubenswrapper[4828]: I1129 07:26:31.912161 4828 generic.go:334] "Generic (PLEG): container finished" podID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerID="0e5dd812a3b7d460294989aaed13467410d4ae9445101f5703921771c03d0752" exitCode=0 Nov 29 07:26:31 crc kubenswrapper[4828]: I1129 07:26:31.913239 4828 generic.go:334] "Generic (PLEG): container finished" podID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerID="f50f79da03102d76c4671b9cbbebe7079a461071d29a8afb68adf82997bc33b8" exitCode=2 Nov 29 07:26:31 crc kubenswrapper[4828]: I1129 07:26:31.913371 
4828 generic.go:334] "Generic (PLEG): container finished" podID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerID="fb6306b303188316bfc9dc7c4126fa2e4d479750bcf3769f33fb57dc00d01f58" exitCode=0 Nov 29 07:26:31 crc kubenswrapper[4828]: I1129 07:26:31.912241 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09187fa9-5870-40f9-95eb-9397eeb0e400","Type":"ContainerDied","Data":"0e5dd812a3b7d460294989aaed13467410d4ae9445101f5703921771c03d0752"} Nov 29 07:26:31 crc kubenswrapper[4828]: I1129 07:26:31.913484 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09187fa9-5870-40f9-95eb-9397eeb0e400","Type":"ContainerDied","Data":"f50f79da03102d76c4671b9cbbebe7079a461071d29a8afb68adf82997bc33b8"} Nov 29 07:26:31 crc kubenswrapper[4828]: I1129 07:26:31.913505 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09187fa9-5870-40f9-95eb-9397eeb0e400","Type":"ContainerDied","Data":"fb6306b303188316bfc9dc7c4126fa2e4d479750bcf3769f33fb57dc00d01f58"} Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.399022 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.399093 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.439011 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.467762 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.696793 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.817600 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09187fa9-5870-40f9-95eb-9397eeb0e400-run-httpd\") pod \"09187fa9-5870-40f9-95eb-9397eeb0e400\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.818079 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-sg-core-conf-yaml\") pod \"09187fa9-5870-40f9-95eb-9397eeb0e400\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.818148 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09187fa9-5870-40f9-95eb-9397eeb0e400-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "09187fa9-5870-40f9-95eb-9397eeb0e400" (UID: "09187fa9-5870-40f9-95eb-9397eeb0e400"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.818176 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4lfm\" (UniqueName: \"kubernetes.io/projected/09187fa9-5870-40f9-95eb-9397eeb0e400-kube-api-access-v4lfm\") pod \"09187fa9-5870-40f9-95eb-9397eeb0e400\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.818293 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-scripts\") pod \"09187fa9-5870-40f9-95eb-9397eeb0e400\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.818418 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-config-data\") pod \"09187fa9-5870-40f9-95eb-9397eeb0e400\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.818451 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-combined-ca-bundle\") pod \"09187fa9-5870-40f9-95eb-9397eeb0e400\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.818484 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09187fa9-5870-40f9-95eb-9397eeb0e400-log-httpd\") pod \"09187fa9-5870-40f9-95eb-9397eeb0e400\" (UID: \"09187fa9-5870-40f9-95eb-9397eeb0e400\") " Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.819093 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/09187fa9-5870-40f9-95eb-9397eeb0e400-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "09187fa9-5870-40f9-95eb-9397eeb0e400" (UID: "09187fa9-5870-40f9-95eb-9397eeb0e400"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.820400 4828 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09187fa9-5870-40f9-95eb-9397eeb0e400-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.820427 4828 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09187fa9-5870-40f9-95eb-9397eeb0e400-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.823999 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-scripts" (OuterVolumeSpecName: "scripts") pod "09187fa9-5870-40f9-95eb-9397eeb0e400" (UID: "09187fa9-5870-40f9-95eb-9397eeb0e400"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.824456 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09187fa9-5870-40f9-95eb-9397eeb0e400-kube-api-access-v4lfm" (OuterVolumeSpecName: "kube-api-access-v4lfm") pod "09187fa9-5870-40f9-95eb-9397eeb0e400" (UID: "09187fa9-5870-40f9-95eb-9397eeb0e400"). InnerVolumeSpecName "kube-api-access-v4lfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.853246 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "09187fa9-5870-40f9-95eb-9397eeb0e400" (UID: "09187fa9-5870-40f9-95eb-9397eeb0e400"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.906991 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09187fa9-5870-40f9-95eb-9397eeb0e400" (UID: "09187fa9-5870-40f9-95eb-9397eeb0e400"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.922152 4828 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.922184 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4lfm\" (UniqueName: \"kubernetes.io/projected/09187fa9-5870-40f9-95eb-9397eeb0e400-kube-api-access-v4lfm\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.922195 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.922204 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.927581 4828 generic.go:334] "Generic (PLEG): container finished" podID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerID="e07c174b14b1062b6f6dd587db2e6a22a2bb5b235b78307ad9818ac23899f09a" exitCode=0 Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.927631 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.927649 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09187fa9-5870-40f9-95eb-9397eeb0e400","Type":"ContainerDied","Data":"e07c174b14b1062b6f6dd587db2e6a22a2bb5b235b78307ad9818ac23899f09a"} Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.928088 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09187fa9-5870-40f9-95eb-9397eeb0e400","Type":"ContainerDied","Data":"c4d00654a896258f8230c2ad6f0478d57580a1f2b9536f513ee9c59b996159ed"} Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.928246 4828 scope.go:117] "RemoveContainer" containerID="0e5dd812a3b7d460294989aaed13467410d4ae9445101f5703921771c03d0752" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.929063 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.929098 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.950417 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-config-data" (OuterVolumeSpecName: "config-data") pod "09187fa9-5870-40f9-95eb-9397eeb0e400" (UID: "09187fa9-5870-40f9-95eb-9397eeb0e400"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.957197 4828 scope.go:117] "RemoveContainer" containerID="f50f79da03102d76c4671b9cbbebe7079a461071d29a8afb68adf82997bc33b8" Nov 29 07:26:32 crc kubenswrapper[4828]: I1129 07:26:32.979811 4828 scope.go:117] "RemoveContainer" containerID="fb6306b303188316bfc9dc7c4126fa2e4d479750bcf3769f33fb57dc00d01f58" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.013712 4828 scope.go:117] "RemoveContainer" containerID="e07c174b14b1062b6f6dd587db2e6a22a2bb5b235b78307ad9818ac23899f09a" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.024734 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09187fa9-5870-40f9-95eb-9397eeb0e400-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.040962 4828 scope.go:117] "RemoveContainer" containerID="0e5dd812a3b7d460294989aaed13467410d4ae9445101f5703921771c03d0752" Nov 29 07:26:33 crc kubenswrapper[4828]: E1129 07:26:33.041597 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e5dd812a3b7d460294989aaed13467410d4ae9445101f5703921771c03d0752\": container with ID starting with 0e5dd812a3b7d460294989aaed13467410d4ae9445101f5703921771c03d0752 not found: ID does not exist" containerID="0e5dd812a3b7d460294989aaed13467410d4ae9445101f5703921771c03d0752" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.041680 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e5dd812a3b7d460294989aaed13467410d4ae9445101f5703921771c03d0752"} err="failed to get container status \"0e5dd812a3b7d460294989aaed13467410d4ae9445101f5703921771c03d0752\": rpc error: code = NotFound desc = could not find container \"0e5dd812a3b7d460294989aaed13467410d4ae9445101f5703921771c03d0752\": container with ID starting with 
0e5dd812a3b7d460294989aaed13467410d4ae9445101f5703921771c03d0752 not found: ID does not exist" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.041734 4828 scope.go:117] "RemoveContainer" containerID="f50f79da03102d76c4671b9cbbebe7079a461071d29a8afb68adf82997bc33b8" Nov 29 07:26:33 crc kubenswrapper[4828]: E1129 07:26:33.042325 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f50f79da03102d76c4671b9cbbebe7079a461071d29a8afb68adf82997bc33b8\": container with ID starting with f50f79da03102d76c4671b9cbbebe7079a461071d29a8afb68adf82997bc33b8 not found: ID does not exist" containerID="f50f79da03102d76c4671b9cbbebe7079a461071d29a8afb68adf82997bc33b8" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.042372 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f50f79da03102d76c4671b9cbbebe7079a461071d29a8afb68adf82997bc33b8"} err="failed to get container status \"f50f79da03102d76c4671b9cbbebe7079a461071d29a8afb68adf82997bc33b8\": rpc error: code = NotFound desc = could not find container \"f50f79da03102d76c4671b9cbbebe7079a461071d29a8afb68adf82997bc33b8\": container with ID starting with f50f79da03102d76c4671b9cbbebe7079a461071d29a8afb68adf82997bc33b8 not found: ID does not exist" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.042403 4828 scope.go:117] "RemoveContainer" containerID="fb6306b303188316bfc9dc7c4126fa2e4d479750bcf3769f33fb57dc00d01f58" Nov 29 07:26:33 crc kubenswrapper[4828]: E1129 07:26:33.042990 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb6306b303188316bfc9dc7c4126fa2e4d479750bcf3769f33fb57dc00d01f58\": container with ID starting with fb6306b303188316bfc9dc7c4126fa2e4d479750bcf3769f33fb57dc00d01f58 not found: ID does not exist" containerID="fb6306b303188316bfc9dc7c4126fa2e4d479750bcf3769f33fb57dc00d01f58" Nov 29 07:26:33 crc 
kubenswrapper[4828]: I1129 07:26:33.043048 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb6306b303188316bfc9dc7c4126fa2e4d479750bcf3769f33fb57dc00d01f58"} err="failed to get container status \"fb6306b303188316bfc9dc7c4126fa2e4d479750bcf3769f33fb57dc00d01f58\": rpc error: code = NotFound desc = could not find container \"fb6306b303188316bfc9dc7c4126fa2e4d479750bcf3769f33fb57dc00d01f58\": container with ID starting with fb6306b303188316bfc9dc7c4126fa2e4d479750bcf3769f33fb57dc00d01f58 not found: ID does not exist" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.043085 4828 scope.go:117] "RemoveContainer" containerID="e07c174b14b1062b6f6dd587db2e6a22a2bb5b235b78307ad9818ac23899f09a" Nov 29 07:26:33 crc kubenswrapper[4828]: E1129 07:26:33.043757 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e07c174b14b1062b6f6dd587db2e6a22a2bb5b235b78307ad9818ac23899f09a\": container with ID starting with e07c174b14b1062b6f6dd587db2e6a22a2bb5b235b78307ad9818ac23899f09a not found: ID does not exist" containerID="e07c174b14b1062b6f6dd587db2e6a22a2bb5b235b78307ad9818ac23899f09a" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.043795 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07c174b14b1062b6f6dd587db2e6a22a2bb5b235b78307ad9818ac23899f09a"} err="failed to get container status \"e07c174b14b1062b6f6dd587db2e6a22a2bb5b235b78307ad9818ac23899f09a\": rpc error: code = NotFound desc = could not find container \"e07c174b14b1062b6f6dd587db2e6a22a2bb5b235b78307ad9818ac23899f09a\": container with ID starting with e07c174b14b1062b6f6dd587db2e6a22a2bb5b235b78307ad9818ac23899f09a not found: ID does not exist" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.265964 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:33 crc kubenswrapper[4828]: 
I1129 07:26:33.283102 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.321652 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:33 crc kubenswrapper[4828]: E1129 07:26:33.327572 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="sg-core" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.327635 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="sg-core" Nov 29 07:26:33 crc kubenswrapper[4828]: E1129 07:26:33.327671 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="ceilometer-notification-agent" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.327685 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="ceilometer-notification-agent" Nov 29 07:26:33 crc kubenswrapper[4828]: E1129 07:26:33.327723 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="proxy-httpd" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.327732 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="proxy-httpd" Nov 29 07:26:33 crc kubenswrapper[4828]: E1129 07:26:33.327770 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="ceilometer-central-agent" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.327779 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="ceilometer-central-agent" Nov 29 07:26:33 crc kubenswrapper[4828]: E1129 07:26:33.327797 4828 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4d02fcd3-69b7-410c-8027-e36cbd5ae830" containerName="neutron-httpd" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.327806 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d02fcd3-69b7-410c-8027-e36cbd5ae830" containerName="neutron-httpd" Nov 29 07:26:33 crc kubenswrapper[4828]: E1129 07:26:33.327834 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d02fcd3-69b7-410c-8027-e36cbd5ae830" containerName="neutron-api" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.327844 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d02fcd3-69b7-410c-8027-e36cbd5ae830" containerName="neutron-api" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.328775 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="proxy-httpd" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.328821 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="ceilometer-central-agent" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.328839 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d02fcd3-69b7-410c-8027-e36cbd5ae830" containerName="neutron-api" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.328855 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="ceilometer-notification-agent" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.328875 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d02fcd3-69b7-410c-8027-e36cbd5ae830" containerName="neutron-httpd" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.328898 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" containerName="sg-core" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.333141 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.341791 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.343243 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.344052 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.426097 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09187fa9-5870-40f9-95eb-9397eeb0e400" path="/var/lib/kubelet/pods/09187fa9-5870-40f9-95eb-9397eeb0e400/volumes" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.434902 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-scripts\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.435009 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-config-data\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.435040 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-log-httpd\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.435074 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-run-httpd\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.435164 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rphfm\" (UniqueName: \"kubernetes.io/projected/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-kube-api-access-rphfm\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.435255 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.435344 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.537966 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-scripts\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.538083 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-config-data\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.538121 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-log-httpd\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.538153 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-run-httpd\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.539097 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-log-httpd\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.539181 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-run-httpd\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.539791 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rphfm\" (UniqueName: \"kubernetes.io/projected/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-kube-api-access-rphfm\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.539877 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.539964 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.543773 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-config-data\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.543801 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.544694 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-scripts\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.545981 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.558865 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rphfm\" (UniqueName: \"kubernetes.io/projected/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-kube-api-access-rphfm\") pod \"ceilometer-0\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " pod="openstack/ceilometer-0" Nov 29 07:26:33 crc kubenswrapper[4828]: I1129 07:26:33.693155 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:34 crc kubenswrapper[4828]: I1129 07:26:34.104645 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:34 crc kubenswrapper[4828]: I1129 07:26:34.104959 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:34 crc kubenswrapper[4828]: I1129 07:26:34.144032 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:34 crc kubenswrapper[4828]: I1129 07:26:34.155450 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:34 crc kubenswrapper[4828]: I1129 07:26:34.224454 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:34 crc kubenswrapper[4828]: I1129 07:26:34.954698 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c","Type":"ContainerStarted","Data":"d117b65ebb5ad30bf82dbbbaf52b7d2ff919cbd472a6afc2f46e9f57cdf38825"} Nov 29 07:26:34 crc kubenswrapper[4828]: I1129 07:26:34.955058 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:34 crc 
kubenswrapper[4828]: I1129 07:26:34.955076 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:35 crc kubenswrapper[4828]: I1129 07:26:35.244063 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:35 crc kubenswrapper[4828]: I1129 07:26:35.289969 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 07:26:35 crc kubenswrapper[4828]: I1129 07:26:35.290128 4828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:26:35 crc kubenswrapper[4828]: I1129 07:26:35.317870 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 07:26:35 crc kubenswrapper[4828]: I1129 07:26:35.965170 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c","Type":"ContainerStarted","Data":"2c1db44a7dec13b832e01f0db052b823c2433a651cc05ea6f435b434575a732a"} Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.389134 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-m8ph8"] Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.390719 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.394709 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.396052 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.412263 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-m8ph8"] Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.511709 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-config-data\") pod \"nova-cell0-cell-mapping-m8ph8\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.511814 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmmf7\" (UniqueName: \"kubernetes.io/projected/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-kube-api-access-kmmf7\") pod \"nova-cell0-cell-mapping-m8ph8\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.511969 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-scripts\") pod \"nova-cell0-cell-mapping-m8ph8\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.512034 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m8ph8\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.514797 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mrdgm"] Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.517092 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.525634 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.525997 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.583599 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mrdgm"] Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.613169 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-scripts\") pod \"nova-cell1-conductor-db-sync-mrdgm\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.613229 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-config-data\") pod \"nova-cell1-conductor-db-sync-mrdgm\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.613494 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-scripts\") pod \"nova-cell0-cell-mapping-m8ph8\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.613533 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m8ph8\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.613754 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mrdgm\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.613923 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-config-data\") pod \"nova-cell0-cell-mapping-m8ph8\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.614055 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvbx6\" (UniqueName: \"kubernetes.io/projected/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-kube-api-access-cvbx6\") pod \"nova-cell1-conductor-db-sync-mrdgm\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 
07:26:36.614205 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmmf7\" (UniqueName: \"kubernetes.io/projected/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-kube-api-access-kmmf7\") pod \"nova-cell0-cell-mapping-m8ph8\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.622307 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-config-data\") pod \"nova-cell0-cell-mapping-m8ph8\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.622817 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-scripts\") pod \"nova-cell0-cell-mapping-m8ph8\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.636880 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m8ph8\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.637703 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmmf7\" (UniqueName: \"kubernetes.io/projected/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-kube-api-access-kmmf7\") pod \"nova-cell0-cell-mapping-m8ph8\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.711841 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.716145 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-config-data\") pod \"nova-cell1-conductor-db-sync-mrdgm\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.716277 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mrdgm\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.716313 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvbx6\" (UniqueName: \"kubernetes.io/projected/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-kube-api-access-cvbx6\") pod \"nova-cell1-conductor-db-sync-mrdgm\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.716352 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-scripts\") pod \"nova-cell1-conductor-db-sync-mrdgm\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.721305 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-scripts\") pod \"nova-cell1-conductor-db-sync-mrdgm\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " 
pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.721989 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mrdgm\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.730790 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-config-data\") pod \"nova-cell1-conductor-db-sync-mrdgm\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.733468 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvbx6\" (UniqueName: \"kubernetes.io/projected/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-kube-api-access-cvbx6\") pod \"nova-cell1-conductor-db-sync-mrdgm\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:36 crc kubenswrapper[4828]: I1129 07:26:36.912085 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.039221 4828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.039258 4828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.143783 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.144901 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.156459 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.243034 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.250723 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.262105 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.314063 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.328000 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4crz6\" (UniqueName: \"kubernetes.io/projected/83e45763-9f9d-4ce2-adc6-2f85184fefd4-kube-api-access-4crz6\") pod \"nova-scheduler-0\" (UID: \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\") " pod="openstack/nova-scheduler-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.328308 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e45763-9f9d-4ce2-adc6-2f85184fefd4-config-data\") pod \"nova-scheduler-0\" (UID: \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\") " pod="openstack/nova-scheduler-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.328438 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rndc8\" (UniqueName: \"kubernetes.io/projected/b29e1b1d-2985-4461-b475-e6617923722e-kube-api-access-rndc8\") pod \"nova-api-0\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " 
pod="openstack/nova-api-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.328589 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b29e1b1d-2985-4461-b475-e6617923722e-logs\") pod \"nova-api-0\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " pod="openstack/nova-api-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.328792 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e45763-9f9d-4ce2-adc6-2f85184fefd4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\") " pod="openstack/nova-scheduler-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.328984 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29e1b1d-2985-4461-b475-e6617923722e-config-data\") pod \"nova-api-0\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " pod="openstack/nova-api-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.329407 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29e1b1d-2985-4461-b475-e6617923722e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " pod="openstack/nova-api-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.359921 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.408038 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.409386 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.416763 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.431084 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29e1b1d-2985-4461-b475-e6617923722e-config-data\") pod \"nova-api-0\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " pod="openstack/nova-api-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.431752 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29e1b1d-2985-4461-b475-e6617923722e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " pod="openstack/nova-api-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.431860 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.431956 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrv4p\" (UniqueName: \"kubernetes.io/projected/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-kube-api-access-rrv4p\") pod \"nova-cell1-novncproxy-0\" (UID: \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.432066 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4crz6\" (UniqueName: 
\"kubernetes.io/projected/83e45763-9f9d-4ce2-adc6-2f85184fefd4-kube-api-access-4crz6\") pod \"nova-scheduler-0\" (UID: \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\") " pod="openstack/nova-scheduler-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.432167 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e45763-9f9d-4ce2-adc6-2f85184fefd4-config-data\") pod \"nova-scheduler-0\" (UID: \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\") " pod="openstack/nova-scheduler-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.432325 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rndc8\" (UniqueName: \"kubernetes.io/projected/b29e1b1d-2985-4461-b475-e6617923722e-kube-api-access-rndc8\") pod \"nova-api-0\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " pod="openstack/nova-api-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.432485 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.432598 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b29e1b1d-2985-4461-b475-e6617923722e-logs\") pod \"nova-api-0\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " pod="openstack/nova-api-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.432693 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e45763-9f9d-4ce2-adc6-2f85184fefd4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\") " 
pod="openstack/nova-scheduler-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.443544 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b29e1b1d-2985-4461-b475-e6617923722e-logs\") pod \"nova-api-0\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " pod="openstack/nova-api-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.444134 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e45763-9f9d-4ce2-adc6-2f85184fefd4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\") " pod="openstack/nova-scheduler-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.448638 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e45763-9f9d-4ce2-adc6-2f85184fefd4-config-data\") pod \"nova-scheduler-0\" (UID: \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\") " pod="openstack/nova-scheduler-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.469018 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29e1b1d-2985-4461-b475-e6617923722e-config-data\") pod \"nova-api-0\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " pod="openstack/nova-api-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.470241 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rndc8\" (UniqueName: \"kubernetes.io/projected/b29e1b1d-2985-4461-b475-e6617923722e-kube-api-access-rndc8\") pod \"nova-api-0\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " pod="openstack/nova-api-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.471168 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b29e1b1d-2985-4461-b475-e6617923722e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " pod="openstack/nova-api-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.477976 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4crz6\" (UniqueName: \"kubernetes.io/projected/83e45763-9f9d-4ce2-adc6-2f85184fefd4-kube-api-access-4crz6\") pod \"nova-scheduler-0\" (UID: \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\") " pod="openstack/nova-scheduler-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.510358 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.535374 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.535460 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrv4p\" (UniqueName: \"kubernetes.io/projected/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-kube-api-access-rrv4p\") pod \"nova-cell1-novncproxy-0\" (UID: \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.535583 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.544083 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.544166 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.546347 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.549713 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.552538 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.559883 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrv4p\" (UniqueName: \"kubernetes.io/projected/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-kube-api-access-rrv4p\") pod \"nova-cell1-novncproxy-0\" (UID: \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.566992 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.598369 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.609352 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-m8ph8"] Nov 29 07:26:37 crc 
kubenswrapper[4828]: I1129 07:26:37.619431 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.622366 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-kwkll"] Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.624282 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.636414 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-kwkll"] Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.739070 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.739395 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtkrh\" (UniqueName: \"kubernetes.io/projected/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-kube-api-access-vtkrh\") pod \"nova-metadata-0\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " pod="openstack/nova-metadata-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.739423 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.739447 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " pod="openstack/nova-metadata-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.739480 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-logs\") pod \"nova-metadata-0\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " pod="openstack/nova-metadata-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.739517 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x9b9\" (UniqueName: \"kubernetes.io/projected/79e77aa1-bd34-4449-9880-10c2160b044b-kube-api-access-8x9b9\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.739578 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-config-data\") pod \"nova-metadata-0\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " pod="openstack/nova-metadata-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.739597 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.739633 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-config\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.739670 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.785722 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.841921 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.841996 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-config\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.842036 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc 
kubenswrapper[4828]: I1129 07:26:37.842109 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.842135 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtkrh\" (UniqueName: \"kubernetes.io/projected/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-kube-api-access-vtkrh\") pod \"nova-metadata-0\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " pod="openstack/nova-metadata-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.842157 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.842179 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " pod="openstack/nova-metadata-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.842222 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-logs\") pod \"nova-metadata-0\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " pod="openstack/nova-metadata-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.842259 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8x9b9\" (UniqueName: \"kubernetes.io/projected/79e77aa1-bd34-4449-9880-10c2160b044b-kube-api-access-8x9b9\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.842333 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-config-data\") pod \"nova-metadata-0\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " pod="openstack/nova-metadata-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.843539 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.843628 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.843719 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.844136 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-config\") 
pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.844146 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-logs\") pod \"nova-metadata-0\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " pod="openstack/nova-metadata-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.845152 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.849483 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " pod="openstack/nova-metadata-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.855373 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-config-data\") pod \"nova-metadata-0\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " pod="openstack/nova-metadata-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.885398 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x9b9\" (UniqueName: \"kubernetes.io/projected/79e77aa1-bd34-4449-9880-10c2160b044b-kube-api-access-8x9b9\") pod \"dnsmasq-dns-5fbc4d444f-kwkll\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") " pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.886327 
4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtkrh\" (UniqueName: \"kubernetes.io/projected/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-kube-api-access-vtkrh\") pod \"nova-metadata-0\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " pod="openstack/nova-metadata-0" Nov 29 07:26:37 crc kubenswrapper[4828]: I1129 07:26:37.975623 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mrdgm"] Nov 29 07:26:38 crc kubenswrapper[4828]: I1129 07:26:38.044028 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:38 crc kubenswrapper[4828]: I1129 07:26:38.068256 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c","Type":"ContainerStarted","Data":"6f65d0e98556eb99cd4ae5fac10a03f615e42afd806371e30433535660d78d7c"} Nov 29 07:26:38 crc kubenswrapper[4828]: I1129 07:26:38.070339 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mrdgm" event={"ID":"38b2334c-7b03-45cb-a780-0b40f0bc7bc3","Type":"ContainerStarted","Data":"e607f0bb2dbf1533087a3c30f69a1495366820cc5d82e03ceba034a7c0f99d5c"} Nov 29 07:26:38 crc kubenswrapper[4828]: I1129 07:26:38.075232 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m8ph8" event={"ID":"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142","Type":"ContainerStarted","Data":"aa849c3956980aa45ab543f21f740802f03954a9d2b3c987ab39a32d6c23d420"} Nov 29 07:26:38 crc kubenswrapper[4828]: I1129 07:26:38.105912 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:26:38 crc kubenswrapper[4828]: I1129 07:26:38.308925 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:26:38 crc kubenswrapper[4828]: W1129 07:26:38.315983 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83e45763_9f9d_4ce2_adc6_2f85184fefd4.slice/crio-ba5e3d0ccf38d2368771f88c1e2565b497d30fe1c511ca93f83060f5e33b42ff WatchSource:0}: Error finding container ba5e3d0ccf38d2368771f88c1e2565b497d30fe1c511ca93f83060f5e33b42ff: Status 404 returned error can't find the container with id ba5e3d0ccf38d2368771f88c1e2565b497d30fe1c511ca93f83060f5e33b42ff Nov 29 07:26:38 crc kubenswrapper[4828]: I1129 07:26:38.330680 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:26:38 crc kubenswrapper[4828]: I1129 07:26:38.666874 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:26:38 crc kubenswrapper[4828]: I1129 07:26:38.799322 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:38 crc kubenswrapper[4828]: I1129 07:26:38.799453 4828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:26:38 crc kubenswrapper[4828]: I1129 07:26:38.804946 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:39 crc kubenswrapper[4828]: I1129 07:26:39.107912 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-kwkll"] Nov 29 07:26:39 crc kubenswrapper[4828]: I1129 07:26:39.125905 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b29e1b1d-2985-4461-b475-e6617923722e","Type":"ContainerStarted","Data":"639ae04cb58d55b2baa700b235a787bbf0ce6bb4886efe8e60e7f7ed702bd629"} Nov 29 07:26:39 crc kubenswrapper[4828]: I1129 07:26:39.132925 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mrdgm" event={"ID":"38b2334c-7b03-45cb-a780-0b40f0bc7bc3","Type":"ContainerStarted","Data":"f9109334675860596cda3df54df7d97b62ebe78cb7f57c8b69ca82ccbdbe22ca"} Nov 29 07:26:39 crc kubenswrapper[4828]: I1129 07:26:39.172111 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m8ph8" event={"ID":"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142","Type":"ContainerStarted","Data":"0fcda68522ace4df96adfb4055bd056070a8135b9d1cd76c3c134638f9384f68"} Nov 29 07:26:39 crc kubenswrapper[4828]: I1129 07:26:39.180517 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83e45763-9f9d-4ce2-adc6-2f85184fefd4","Type":"ContainerStarted","Data":"ba5e3d0ccf38d2368771f88c1e2565b497d30fe1c511ca93f83060f5e33b42ff"} Nov 29 07:26:39 crc kubenswrapper[4828]: I1129 07:26:39.190144 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:26:39 crc kubenswrapper[4828]: I1129 07:26:39.191521 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-mrdgm" podStartSLOduration=3.191482459 podStartE2EDuration="3.191482459s" podCreationTimestamp="2025-11-29 07:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:39.165119732 +0000 UTC m=+1538.787195800" watchObservedRunningTime="2025-11-29 07:26:39.191482459 +0000 UTC m=+1538.813558517" Nov 29 07:26:39 crc kubenswrapper[4828]: I1129 07:26:39.203624 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f","Type":"ContainerStarted","Data":"b40085dcb7981a955e92e289971f73cc4deebba1132f21fcf138665f5365de6a"} Nov 29 07:26:39 crc kubenswrapper[4828]: I1129 07:26:39.209857 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c","Type":"ContainerStarted","Data":"f1008e5e9e01ff7145e995542cf801c3538a834c4f3b1d91526a1ae6ef22cf53"} Nov 29 07:26:39 crc kubenswrapper[4828]: I1129 07:26:39.218978 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-m8ph8" podStartSLOduration=3.218956735 podStartE2EDuration="3.218956735s" podCreationTimestamp="2025-11-29 07:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:39.191363026 +0000 UTC m=+1538.813439084" watchObservedRunningTime="2025-11-29 07:26:39.218956735 +0000 UTC m=+1538.841032793" Nov 29 07:26:40 crc kubenswrapper[4828]: I1129 07:26:40.271690 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6","Type":"ContainerStarted","Data":"bd49e633e8e63b932f11e6502e7bc7103e8a8831d5eba6c5eb0080540b775aab"} Nov 29 07:26:40 crc kubenswrapper[4828]: I1129 07:26:40.287085 4828 generic.go:334] "Generic (PLEG): container finished" podID="79e77aa1-bd34-4449-9880-10c2160b044b" containerID="d3544e11c20f3606c8b099b1f8c9b00efeb66a5d69637e5bf8a6684b0bb5c41c" exitCode=0 Nov 29 07:26:40 crc kubenswrapper[4828]: I1129 07:26:40.287239 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" event={"ID":"79e77aa1-bd34-4449-9880-10c2160b044b","Type":"ContainerDied","Data":"d3544e11c20f3606c8b099b1f8c9b00efeb66a5d69637e5bf8a6684b0bb5c41c"} Nov 29 07:26:40 crc kubenswrapper[4828]: I1129 07:26:40.287390 4828 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" event={"ID":"79e77aa1-bd34-4449-9880-10c2160b044b","Type":"ContainerStarted","Data":"b2d1c3495dedbb256a51b853cabe50a4ca64bcd930fe81ad0123c5e6fc806f3a"} Nov 29 07:26:40 crc kubenswrapper[4828]: I1129 07:26:40.991365 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:26:41 crc kubenswrapper[4828]: I1129 07:26:41.005363 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:26:41 crc kubenswrapper[4828]: I1129 07:26:41.325477 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c","Type":"ContainerStarted","Data":"616abcb3a818ba9c8085e9d13dbcdb3176d5b8fe5d1a616cd7223fa5a21a75ac"} Nov 29 07:26:41 crc kubenswrapper[4828]: I1129 07:26:41.326700 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:26:41 crc kubenswrapper[4828]: I1129 07:26:41.365461 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.427844083 podStartE2EDuration="8.365441764s" podCreationTimestamp="2025-11-29 07:26:33 +0000 UTC" firstStartedPulling="2025-11-29 07:26:34.226170969 +0000 UTC m=+1533.848247027" lastFinishedPulling="2025-11-29 07:26:40.16376865 +0000 UTC m=+1539.785844708" observedRunningTime="2025-11-29 07:26:41.354435542 +0000 UTC m=+1540.976511610" watchObservedRunningTime="2025-11-29 07:26:41.365441764 +0000 UTC m=+1540.987517812" Nov 29 07:26:41 crc kubenswrapper[4828]: I1129 07:26:41.486610 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:26:41 crc kubenswrapper[4828]: I1129 
07:26:41.486665 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:26:41 crc kubenswrapper[4828]: I1129 07:26:41.667320 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:26:41 crc kubenswrapper[4828]: I1129 07:26:41.667713 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="3869e659-d33a-41bf-a89b-5cb222280fac" containerName="nova-cell0-conductor-conductor" containerID="cri-o://17b66ff702cf6860a6731e034cf1c9c17167e9540d80960eaef59a0430ebc39c" gracePeriod=30 Nov 29 07:26:41 crc kubenswrapper[4828]: I1129 07:26:41.684146 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:26:41 crc kubenswrapper[4828]: I1129 07:26:41.707194 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:26:42 crc kubenswrapper[4828]: I1129 07:26:42.350455 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" event={"ID":"79e77aa1-bd34-4449-9880-10c2160b044b","Type":"ContainerStarted","Data":"902456cd170b0b1c264068107fd6b8a3fdac983b87c0191b130022eafcce2f67"} Nov 29 07:26:42 crc kubenswrapper[4828]: I1129 07:26:42.350855 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:42 crc kubenswrapper[4828]: I1129 07:26:42.378286 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" podStartSLOduration=5.378244076 podStartE2EDuration="5.378244076s" podCreationTimestamp="2025-11-29 07:26:37 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:42.370822315 +0000 UTC m=+1541.992898383" watchObservedRunningTime="2025-11-29 07:26:42.378244076 +0000 UTC m=+1542.000320134" Nov 29 07:26:44 crc kubenswrapper[4828]: I1129 07:26:44.371348 4828 generic.go:334] "Generic (PLEG): container finished" podID="3869e659-d33a-41bf-a89b-5cb222280fac" containerID="17b66ff702cf6860a6731e034cf1c9c17167e9540d80960eaef59a0430ebc39c" exitCode=0 Nov 29 07:26:44 crc kubenswrapper[4828]: I1129 07:26:44.371423 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3869e659-d33a-41bf-a89b-5cb222280fac","Type":"ContainerDied","Data":"17b66ff702cf6860a6731e034cf1c9c17167e9540d80960eaef59a0430ebc39c"} Nov 29 07:26:44 crc kubenswrapper[4828]: I1129 07:26:44.951225 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:44 crc kubenswrapper[4828]: I1129 07:26:44.952380 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="sg-core" containerID="cri-o://f1008e5e9e01ff7145e995542cf801c3538a834c4f3b1d91526a1ae6ef22cf53" gracePeriod=30 Nov 29 07:26:44 crc kubenswrapper[4828]: I1129 07:26:44.952427 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="ceilometer-notification-agent" containerID="cri-o://6f65d0e98556eb99cd4ae5fac10a03f615e42afd806371e30433535660d78d7c" gracePeriod=30 Nov 29 07:26:44 crc kubenswrapper[4828]: I1129 07:26:44.952462 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="proxy-httpd" containerID="cri-o://616abcb3a818ba9c8085e9d13dbcdb3176d5b8fe5d1a616cd7223fa5a21a75ac" gracePeriod=30 Nov 29 
07:26:44 crc kubenswrapper[4828]: I1129 07:26:44.952335 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="ceilometer-central-agent" containerID="cri-o://2c1db44a7dec13b832e01f0db052b823c2433a651cc05ea6f435b434575a732a" gracePeriod=30 Nov 29 07:26:45 crc kubenswrapper[4828]: E1129 07:26:45.186946 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17b66ff702cf6860a6731e034cf1c9c17167e9540d80960eaef59a0430ebc39c is running failed: container process not found" containerID="17b66ff702cf6860a6731e034cf1c9c17167e9540d80960eaef59a0430ebc39c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 29 07:26:45 crc kubenswrapper[4828]: E1129 07:26:45.187453 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17b66ff702cf6860a6731e034cf1c9c17167e9540d80960eaef59a0430ebc39c is running failed: container process not found" containerID="17b66ff702cf6860a6731e034cf1c9c17167e9540d80960eaef59a0430ebc39c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 29 07:26:45 crc kubenswrapper[4828]: E1129 07:26:45.187776 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17b66ff702cf6860a6731e034cf1c9c17167e9540d80960eaef59a0430ebc39c is running failed: container process not found" containerID="17b66ff702cf6860a6731e034cf1c9c17167e9540d80960eaef59a0430ebc39c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 29 07:26:45 crc kubenswrapper[4828]: E1129 07:26:45.187868 4828 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17b66ff702cf6860a6731e034cf1c9c17167e9540d80960eaef59a0430ebc39c is 
running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3869e659-d33a-41bf-a89b-5cb222280fac" containerName="nova-cell0-conductor-conductor" Nov 29 07:26:45 crc kubenswrapper[4828]: I1129 07:26:45.383447 4828 generic.go:334] "Generic (PLEG): container finished" podID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerID="616abcb3a818ba9c8085e9d13dbcdb3176d5b8fe5d1a616cd7223fa5a21a75ac" exitCode=0 Nov 29 07:26:45 crc kubenswrapper[4828]: I1129 07:26:45.383484 4828 generic.go:334] "Generic (PLEG): container finished" podID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerID="f1008e5e9e01ff7145e995542cf801c3538a834c4f3b1d91526a1ae6ef22cf53" exitCode=2 Nov 29 07:26:45 crc kubenswrapper[4828]: I1129 07:26:45.383497 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c","Type":"ContainerDied","Data":"616abcb3a818ba9c8085e9d13dbcdb3176d5b8fe5d1a616cd7223fa5a21a75ac"} Nov 29 07:26:45 crc kubenswrapper[4828]: I1129 07:26:45.383545 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c","Type":"ContainerDied","Data":"f1008e5e9e01ff7145e995542cf801c3538a834c4f3b1d91526a1ae6ef22cf53"} Nov 29 07:26:46 crc kubenswrapper[4828]: I1129 07:26:46.396642 4828 generic.go:334] "Generic (PLEG): container finished" podID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerID="6f65d0e98556eb99cd4ae5fac10a03f615e42afd806371e30433535660d78d7c" exitCode=0 Nov 29 07:26:46 crc kubenswrapper[4828]: I1129 07:26:46.396721 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c","Type":"ContainerDied","Data":"6f65d0e98556eb99cd4ae5fac10a03f615e42afd806371e30433535660d78d7c"} Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.025785 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.046476 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.135373 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-klssl"] Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.135613 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" podUID="40fa68bc-11d6-4b01-b6ec-b3839e003d8c" containerName="dnsmasq-dns" containerID="cri-o://ff8dff7a3a3039430c2fbe5affc49f684bb3373fc89ad9a5f0a610e68f26b498" gracePeriod=10 Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.157435 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3869e659-d33a-41bf-a89b-5cb222280fac-combined-ca-bundle\") pod \"3869e659-d33a-41bf-a89b-5cb222280fac\" (UID: \"3869e659-d33a-41bf-a89b-5cb222280fac\") " Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.157475 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3869e659-d33a-41bf-a89b-5cb222280fac-config-data\") pod \"3869e659-d33a-41bf-a89b-5cb222280fac\" (UID: \"3869e659-d33a-41bf-a89b-5cb222280fac\") " Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.157600 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq2v9\" (UniqueName: \"kubernetes.io/projected/3869e659-d33a-41bf-a89b-5cb222280fac-kube-api-access-xq2v9\") pod \"3869e659-d33a-41bf-a89b-5cb222280fac\" (UID: \"3869e659-d33a-41bf-a89b-5cb222280fac\") " Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.178169 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3869e659-d33a-41bf-a89b-5cb222280fac-kube-api-access-xq2v9" (OuterVolumeSpecName: "kube-api-access-xq2v9") pod "3869e659-d33a-41bf-a89b-5cb222280fac" (UID: "3869e659-d33a-41bf-a89b-5cb222280fac"). InnerVolumeSpecName "kube-api-access-xq2v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.240426 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3869e659-d33a-41bf-a89b-5cb222280fac-config-data" (OuterVolumeSpecName: "config-data") pod "3869e659-d33a-41bf-a89b-5cb222280fac" (UID: "3869e659-d33a-41bf-a89b-5cb222280fac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.259217 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3869e659-d33a-41bf-a89b-5cb222280fac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3869e659-d33a-41bf-a89b-5cb222280fac" (UID: "3869e659-d33a-41bf-a89b-5cb222280fac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.260741 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq2v9\" (UniqueName: \"kubernetes.io/projected/3869e659-d33a-41bf-a89b-5cb222280fac-kube-api-access-xq2v9\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.260869 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3869e659-d33a-41bf-a89b-5cb222280fac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.260976 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3869e659-d33a-41bf-a89b-5cb222280fac-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.441641 4828 generic.go:334] "Generic (PLEG): container finished" podID="40fa68bc-11d6-4b01-b6ec-b3839e003d8c" containerID="ff8dff7a3a3039430c2fbe5affc49f684bb3373fc89ad9a5f0a610e68f26b498" exitCode=0 Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.441994 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" event={"ID":"40fa68bc-11d6-4b01-b6ec-b3839e003d8c","Type":"ContainerDied","Data":"ff8dff7a3a3039430c2fbe5affc49f684bb3373fc89ad9a5f0a610e68f26b498"} Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.444402 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f","Type":"ContainerStarted","Data":"f0dfdc647e462852c4bb506b4b2a2b6dd3764f0a0b45c8e722c325f30782b689"} Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.445387 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f" 
containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f0dfdc647e462852c4bb506b4b2a2b6dd3764f0a0b45c8e722c325f30782b689" gracePeriod=30 Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.447035 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3869e659-d33a-41bf-a89b-5cb222280fac","Type":"ContainerDied","Data":"cdd8c3a3573eed2aa991f22d6f0dbc51f17eb56b5cfd2fa4f789b6b0ebf5ec40"} Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.447072 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.447099 4828 scope.go:117] "RemoveContainer" containerID="17b66ff702cf6860a6731e034cf1c9c17167e9540d80960eaef59a0430ebc39c" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.450792 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b29e1b1d-2985-4461-b475-e6617923722e","Type":"ContainerStarted","Data":"603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d"} Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.452278 4828 generic.go:334] "Generic (PLEG): container finished" podID="a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142" containerID="0fcda68522ace4df96adfb4055bd056070a8135b9d1cd76c3c134638f9384f68" exitCode=0 Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.452329 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m8ph8" event={"ID":"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142","Type":"ContainerDied","Data":"0fcda68522ace4df96adfb4055bd056070a8135b9d1cd76c3c134638f9384f68"} Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.467153 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.662756508 podStartE2EDuration="11.467130664s" podCreationTimestamp="2025-11-29 07:26:37 +0000 UTC" 
firstStartedPulling="2025-11-29 07:26:38.701287796 +0000 UTC m=+1538.323363844" lastFinishedPulling="2025-11-29 07:26:47.505661942 +0000 UTC m=+1547.127738000" observedRunningTime="2025-11-29 07:26:48.460767921 +0000 UTC m=+1548.082843979" watchObservedRunningTime="2025-11-29 07:26:48.467130664 +0000 UTC m=+1548.089206722" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.471864 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="83e45763-9f9d-4ce2-adc6-2f85184fefd4" containerName="nova-scheduler-scheduler" containerID="cri-o://a90307cef54a6ead56925f280411cc6e2241b86030b4efaead46f880339d5dd0" gracePeriod=30 Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.472191 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83e45763-9f9d-4ce2-adc6-2f85184fefd4","Type":"ContainerStarted","Data":"a90307cef54a6ead56925f280411cc6e2241b86030b4efaead46f880339d5dd0"} Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.494449 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6","Type":"ContainerStarted","Data":"2fd1a9fe6de9eed5521a75874efebf750fd9d478f8f873ec097057bfcfe5ea1a"} Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.494619 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" containerName="nova-metadata-log" containerID="cri-o://2fd1a9fe6de9eed5521a75874efebf750fd9d478f8f873ec097057bfcfe5ea1a" gracePeriod=30 Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.494871 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" containerName="nova-metadata-metadata" containerID="cri-o://693984c129ff42ea06348265749095aa794e1690959fe4bf9b805a96548bb62d" gracePeriod=30 
Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.522822 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.382790986 podStartE2EDuration="11.522801895s" podCreationTimestamp="2025-11-29 07:26:37 +0000 UTC" firstStartedPulling="2025-11-29 07:26:38.350946925 +0000 UTC m=+1537.973022983" lastFinishedPulling="2025-11-29 07:26:47.490957834 +0000 UTC m=+1547.113033892" observedRunningTime="2025-11-29 07:26:48.516905423 +0000 UTC m=+1548.138981501" watchObservedRunningTime="2025-11-29 07:26:48.522801895 +0000 UTC m=+1548.144877953" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.607857 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.623520 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.637343 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:26:48 crc kubenswrapper[4828]: E1129 07:26:48.637854 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3869e659-d33a-41bf-a89b-5cb222280fac" containerName="nova-cell0-conductor-conductor" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.637880 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3869e659-d33a-41bf-a89b-5cb222280fac" containerName="nova-cell0-conductor-conductor" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.638131 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="3869e659-d33a-41bf-a89b-5cb222280fac" containerName="nova-cell0-conductor-conductor" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.638828 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.642191 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.643602 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.301305873 podStartE2EDuration="11.643560347s" podCreationTimestamp="2025-11-29 07:26:37 +0000 UTC" firstStartedPulling="2025-11-29 07:26:39.20553992 +0000 UTC m=+1538.827615978" lastFinishedPulling="2025-11-29 07:26:47.547794394 +0000 UTC m=+1547.169870452" observedRunningTime="2025-11-29 07:26:48.579467811 +0000 UTC m=+1548.201543899" watchObservedRunningTime="2025-11-29 07:26:48.643560347 +0000 UTC m=+1548.265636405" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.665640 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.681410 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.772577 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47cst\" (UniqueName: \"kubernetes.io/projected/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-kube-api-access-47cst\") pod \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.772625 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-dns-swift-storage-0\") pod \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.772676 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-ovsdbserver-sb\") pod \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.772767 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-dns-svc\") pod \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.772887 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-config\") pod \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.772960 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-ovsdbserver-nb\") pod \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\" (UID: \"40fa68bc-11d6-4b01-b6ec-b3839e003d8c\") " Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.773229 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrtxl\" (UniqueName: \"kubernetes.io/projected/ff3c67db-7084-4abe-94f3-aafca06ae5e3-kube-api-access-jrtxl\") pod \"nova-cell0-conductor-0\" (UID: \"ff3c67db-7084-4abe-94f3-aafca06ae5e3\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.773289 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff3c67db-7084-4abe-94f3-aafca06ae5e3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ff3c67db-7084-4abe-94f3-aafca06ae5e3\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.773320 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff3c67db-7084-4abe-94f3-aafca06ae5e3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ff3c67db-7084-4abe-94f3-aafca06ae5e3\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.781459 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-kube-api-access-47cst" (OuterVolumeSpecName: "kube-api-access-47cst") pod "40fa68bc-11d6-4b01-b6ec-b3839e003d8c" (UID: "40fa68bc-11d6-4b01-b6ec-b3839e003d8c"). InnerVolumeSpecName "kube-api-access-47cst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.839297 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "40fa68bc-11d6-4b01-b6ec-b3839e003d8c" (UID: "40fa68bc-11d6-4b01-b6ec-b3839e003d8c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.860194 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "40fa68bc-11d6-4b01-b6ec-b3839e003d8c" (UID: "40fa68bc-11d6-4b01-b6ec-b3839e003d8c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.875253 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrtxl\" (UniqueName: \"kubernetes.io/projected/ff3c67db-7084-4abe-94f3-aafca06ae5e3-kube-api-access-jrtxl\") pod \"nova-cell0-conductor-0\" (UID: \"ff3c67db-7084-4abe-94f3-aafca06ae5e3\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.875361 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff3c67db-7084-4abe-94f3-aafca06ae5e3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ff3c67db-7084-4abe-94f3-aafca06ae5e3\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.875397 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff3c67db-7084-4abe-94f3-aafca06ae5e3-config-data\") pod 
\"nova-cell0-conductor-0\" (UID: \"ff3c67db-7084-4abe-94f3-aafca06ae5e3\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.875492 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.875504 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47cst\" (UniqueName: \"kubernetes.io/projected/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-kube-api-access-47cst\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.875513 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.877389 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-config" (OuterVolumeSpecName: "config") pod "40fa68bc-11d6-4b01-b6ec-b3839e003d8c" (UID: "40fa68bc-11d6-4b01-b6ec-b3839e003d8c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.878986 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff3c67db-7084-4abe-94f3-aafca06ae5e3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ff3c67db-7084-4abe-94f3-aafca06ae5e3\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.886382 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40fa68bc-11d6-4b01-b6ec-b3839e003d8c" (UID: "40fa68bc-11d6-4b01-b6ec-b3839e003d8c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.886649 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff3c67db-7084-4abe-94f3-aafca06ae5e3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ff3c67db-7084-4abe-94f3-aafca06ae5e3\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.892607 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "40fa68bc-11d6-4b01-b6ec-b3839e003d8c" (UID: "40fa68bc-11d6-4b01-b6ec-b3839e003d8c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.898608 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrtxl\" (UniqueName: \"kubernetes.io/projected/ff3c67db-7084-4abe-94f3-aafca06ae5e3-kube-api-access-jrtxl\") pod \"nova-cell0-conductor-0\" (UID: \"ff3c67db-7084-4abe-94f3-aafca06ae5e3\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.963921 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.977181 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.977226 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:48 crc kubenswrapper[4828]: I1129 07:26:48.977238 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40fa68bc-11d6-4b01-b6ec-b3839e003d8c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.426935 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3869e659-d33a-41bf-a89b-5cb222280fac" path="/var/lib/kubelet/pods/3869e659-d33a-41bf-a89b-5cb222280fac/volumes" Nov 29 07:26:49 crc kubenswrapper[4828]: W1129 07:26:49.448049 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff3c67db_7084_4abe_94f3_aafca06ae5e3.slice/crio-7aa4946e2a22dcc6af3be52a029655c80dbc1edb5428c3d693ede00d43467860 WatchSource:0}: Error finding container 
7aa4946e2a22dcc6af3be52a029655c80dbc1edb5428c3d693ede00d43467860: Status 404 returned error can't find the container with id 7aa4946e2a22dcc6af3be52a029655c80dbc1edb5428c3d693ede00d43467860 Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.448738 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.525208 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b29e1b1d-2985-4461-b475-e6617923722e","Type":"ContainerStarted","Data":"8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea"} Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.525457 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b29e1b1d-2985-4461-b475-e6617923722e" containerName="nova-api-log" containerID="cri-o://603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d" gracePeriod=30 Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.525713 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b29e1b1d-2985-4461-b475-e6617923722e" containerName="nova-api-api" containerID="cri-o://8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea" gracePeriod=30 Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.533138 4828 generic.go:334] "Generic (PLEG): container finished" podID="6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" containerID="2fd1a9fe6de9eed5521a75874efebf750fd9d478f8f873ec097057bfcfe5ea1a" exitCode=143 Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.533238 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6","Type":"ContainerDied","Data":"2fd1a9fe6de9eed5521a75874efebf750fd9d478f8f873ec097057bfcfe5ea1a"} Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.533323 4828 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-metadata-0" event={"ID":"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6","Type":"ContainerStarted","Data":"693984c129ff42ea06348265749095aa794e1690959fe4bf9b805a96548bb62d"} Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.544819 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" event={"ID":"40fa68bc-11d6-4b01-b6ec-b3839e003d8c","Type":"ContainerDied","Data":"be4c0499542808795b84d8632b524298c7c6cd7fdda0a12c584cba3218416fc5"} Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.544837 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-klssl" Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.544911 4828 scope.go:117] "RemoveContainer" containerID="ff8dff7a3a3039430c2fbe5affc49f684bb3373fc89ad9a5f0a610e68f26b498" Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.550190 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ff3c67db-7084-4abe-94f3-aafca06ae5e3","Type":"ContainerStarted","Data":"7aa4946e2a22dcc6af3be52a029655c80dbc1edb5428c3d693ede00d43467860"} Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.550712 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.385357145 podStartE2EDuration="12.550683264s" podCreationTimestamp="2025-11-29 07:26:37 +0000 UTC" firstStartedPulling="2025-11-29 07:26:38.36983393 +0000 UTC m=+1537.991909988" lastFinishedPulling="2025-11-29 07:26:47.535160049 +0000 UTC m=+1547.157236107" observedRunningTime="2025-11-29 07:26:49.548552249 +0000 UTC m=+1549.170628307" watchObservedRunningTime="2025-11-29 07:26:49.550683264 +0000 UTC m=+1549.172759322" Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.587963 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-klssl"] Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 
07:26:49.589771 4828 scope.go:117] "RemoveContainer" containerID="cede101ca1b90f11b3bdc12e9982c06dd91a9d316ba82787ff23f08ee0b5eecc" Nov 29 07:26:49 crc kubenswrapper[4828]: I1129 07:26:49.609806 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-klssl"] Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.021631 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.108716 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-scripts\") pod \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.108817 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-combined-ca-bundle\") pod \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.108886 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-config-data\") pod \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.108973 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmmf7\" (UniqueName: \"kubernetes.io/projected/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-kube-api-access-kmmf7\") pod \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\" (UID: \"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142\") " Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.124435 4828 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-scripts" (OuterVolumeSpecName: "scripts") pod "a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142" (UID: "a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.130507 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-kube-api-access-kmmf7" (OuterVolumeSpecName: "kube-api-access-kmmf7") pod "a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142" (UID: "a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142"). InnerVolumeSpecName "kube-api-access-kmmf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.186653 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-config-data" (OuterVolumeSpecName: "config-data") pod "a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142" (UID: "a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.215102 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.215155 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmmf7\" (UniqueName: \"kubernetes.io/projected/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-kube-api-access-kmmf7\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.215168 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.233506 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142" (UID: "a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.240287 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.318619 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.419982 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b29e1b1d-2985-4461-b475-e6617923722e-logs\") pod \"b29e1b1d-2985-4461-b475-e6617923722e\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.420222 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29e1b1d-2985-4461-b475-e6617923722e-config-data\") pod \"b29e1b1d-2985-4461-b475-e6617923722e\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.420281 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29e1b1d-2985-4461-b475-e6617923722e-combined-ca-bundle\") pod \"b29e1b1d-2985-4461-b475-e6617923722e\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.420359 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rndc8\" (UniqueName: \"kubernetes.io/projected/b29e1b1d-2985-4461-b475-e6617923722e-kube-api-access-rndc8\") pod \"b29e1b1d-2985-4461-b475-e6617923722e\" (UID: \"b29e1b1d-2985-4461-b475-e6617923722e\") " Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.420459 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b29e1b1d-2985-4461-b475-e6617923722e-logs" (OuterVolumeSpecName: "logs") pod 
"b29e1b1d-2985-4461-b475-e6617923722e" (UID: "b29e1b1d-2985-4461-b475-e6617923722e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.420810 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b29e1b1d-2985-4461-b475-e6617923722e-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.427522 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29e1b1d-2985-4461-b475-e6617923722e-kube-api-access-rndc8" (OuterVolumeSpecName: "kube-api-access-rndc8") pod "b29e1b1d-2985-4461-b475-e6617923722e" (UID: "b29e1b1d-2985-4461-b475-e6617923722e"). InnerVolumeSpecName "kube-api-access-rndc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.449534 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b29e1b1d-2985-4461-b475-e6617923722e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b29e1b1d-2985-4461-b475-e6617923722e" (UID: "b29e1b1d-2985-4461-b475-e6617923722e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.456168 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b29e1b1d-2985-4461-b475-e6617923722e-config-data" (OuterVolumeSpecName: "config-data") pod "b29e1b1d-2985-4461-b475-e6617923722e" (UID: "b29e1b1d-2985-4461-b475-e6617923722e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.522830 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rndc8\" (UniqueName: \"kubernetes.io/projected/b29e1b1d-2985-4461-b475-e6617923722e-kube-api-access-rndc8\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.522867 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29e1b1d-2985-4461-b475-e6617923722e-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.522877 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29e1b1d-2985-4461-b475-e6617923722e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.569681 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ff3c67db-7084-4abe-94f3-aafca06ae5e3","Type":"ContainerStarted","Data":"945c06fb2def5a5161e416eb6a5d9bf16bf688c8cfdf60e50ae183afd6714333"} Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.570023 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.571808 4828 generic.go:334] "Generic (PLEG): container finished" podID="b29e1b1d-2985-4461-b475-e6617923722e" containerID="8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea" exitCode=0 Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.571920 4828 generic.go:334] "Generic (PLEG): container finished" podID="b29e1b1d-2985-4461-b475-e6617923722e" containerID="603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d" exitCode=143 Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.571951 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"b29e1b1d-2985-4461-b475-e6617923722e","Type":"ContainerDied","Data":"8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea"} Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.572090 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b29e1b1d-2985-4461-b475-e6617923722e","Type":"ContainerDied","Data":"603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d"} Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.572152 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b29e1b1d-2985-4461-b475-e6617923722e","Type":"ContainerDied","Data":"639ae04cb58d55b2baa700b235a787bbf0ce6bb4886efe8e60e7f7ed702bd629"} Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.572110 4828 scope.go:117] "RemoveContainer" containerID="8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.572030 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.574364 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m8ph8" event={"ID":"a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142","Type":"ContainerDied","Data":"aa849c3956980aa45ab543f21f740802f03954a9d2b3c987ab39a32d6c23d420"} Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.574415 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa849c3956980aa45ab543f21f740802f03954a9d2b3c987ab39a32d6c23d420" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.574483 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m8ph8" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.596077 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.596053491 podStartE2EDuration="2.596053491s" podCreationTimestamp="2025-11-29 07:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:50.59173274 +0000 UTC m=+1550.213808878" watchObservedRunningTime="2025-11-29 07:26:50.596053491 +0000 UTC m=+1550.218129549" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.602604 4828 scope.go:117] "RemoveContainer" containerID="603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.632888 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.637576 4828 scope.go:117] "RemoveContainer" containerID="8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea" Nov 29 07:26:50 crc kubenswrapper[4828]: E1129 07:26:50.637985 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea\": container with ID starting with 8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea not found: ID does not exist" containerID="8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.638020 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea"} err="failed to get container status \"8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea\": rpc error: code = NotFound desc = could not find container 
\"8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea\": container with ID starting with 8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea not found: ID does not exist" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.638047 4828 scope.go:117] "RemoveContainer" containerID="603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d" Nov 29 07:26:50 crc kubenswrapper[4828]: E1129 07:26:50.638322 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d\": container with ID starting with 603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d not found: ID does not exist" containerID="603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.638344 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d"} err="failed to get container status \"603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d\": rpc error: code = NotFound desc = could not find container \"603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d\": container with ID starting with 603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d not found: ID does not exist" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.638360 4828 scope.go:117] "RemoveContainer" containerID="8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.639111 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea"} err="failed to get container status \"8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea\": rpc error: code = NotFound desc = could not find 
container \"8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea\": container with ID starting with 8c0a4a96e4e13f81fbcf14d9fd314d123f1b3f8092cb0c3e859e56dd040f32ea not found: ID does not exist" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.639138 4828 scope.go:117] "RemoveContainer" containerID="603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.639411 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d"} err="failed to get container status \"603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d\": rpc error: code = NotFound desc = could not find container \"603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d\": container with ID starting with 603ddf495c3a3815ec13524ba9dbd67747b3f435076222970e335e1c11f9793d not found: ID does not exist" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.655180 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.663401 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 07:26:50 crc kubenswrapper[4828]: E1129 07:26:50.663881 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29e1b1d-2985-4461-b475-e6617923722e" containerName="nova-api-log" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.663905 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29e1b1d-2985-4461-b475-e6617923722e" containerName="nova-api-log" Nov 29 07:26:50 crc kubenswrapper[4828]: E1129 07:26:50.663919 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29e1b1d-2985-4461-b475-e6617923722e" containerName="nova-api-api" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.663926 4828 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b29e1b1d-2985-4461-b475-e6617923722e" containerName="nova-api-api" Nov 29 07:26:50 crc kubenswrapper[4828]: E1129 07:26:50.663946 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142" containerName="nova-manage" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.663954 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142" containerName="nova-manage" Nov 29 07:26:50 crc kubenswrapper[4828]: E1129 07:26:50.663970 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fa68bc-11d6-4b01-b6ec-b3839e003d8c" containerName="dnsmasq-dns" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.663977 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fa68bc-11d6-4b01-b6ec-b3839e003d8c" containerName="dnsmasq-dns" Nov 29 07:26:50 crc kubenswrapper[4828]: E1129 07:26:50.664005 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fa68bc-11d6-4b01-b6ec-b3839e003d8c" containerName="init" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.664012 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fa68bc-11d6-4b01-b6ec-b3839e003d8c" containerName="init" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.664239 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29e1b1d-2985-4461-b475-e6617923722e" containerName="nova-api-api" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.664290 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142" containerName="nova-manage" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.664300 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29e1b1d-2985-4461-b475-e6617923722e" containerName="nova-api-log" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.664318 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="40fa68bc-11d6-4b01-b6ec-b3839e003d8c" 
containerName="dnsmasq-dns" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.665498 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.670823 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.678665 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.779571 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-config-data\") pod \"nova-api-0\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " pod="openstack/nova-api-0" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.779642 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kh8p\" (UniqueName: \"kubernetes.io/projected/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-kube-api-access-5kh8p\") pod \"nova-api-0\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " pod="openstack/nova-api-0" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.779679 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " pod="openstack/nova-api-0" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.779708 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-logs\") pod \"nova-api-0\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " pod="openstack/nova-api-0" Nov 29 07:26:50 
crc kubenswrapper[4828]: I1129 07:26:50.881303 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " pod="openstack/nova-api-0" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.881370 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-logs\") pod \"nova-api-0\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " pod="openstack/nova-api-0" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.881557 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-config-data\") pod \"nova-api-0\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " pod="openstack/nova-api-0" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.881595 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kh8p\" (UniqueName: \"kubernetes.io/projected/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-kube-api-access-5kh8p\") pod \"nova-api-0\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " pod="openstack/nova-api-0" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.882463 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-logs\") pod \"nova-api-0\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " pod="openstack/nova-api-0" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.887014 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-config-data\") pod \"nova-api-0\" (UID: 
\"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " pod="openstack/nova-api-0" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.888846 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " pod="openstack/nova-api-0" Nov 29 07:26:50 crc kubenswrapper[4828]: I1129 07:26:50.901336 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kh8p\" (UniqueName: \"kubernetes.io/projected/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-kube-api-access-5kh8p\") pod \"nova-api-0\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " pod="openstack/nova-api-0" Nov 29 07:26:51 crc kubenswrapper[4828]: I1129 07:26:51.162096 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:26:51 crc kubenswrapper[4828]: I1129 07:26:51.427041 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40fa68bc-11d6-4b01-b6ec-b3839e003d8c" path="/var/lib/kubelet/pods/40fa68bc-11d6-4b01-b6ec-b3839e003d8c/volumes" Nov 29 07:26:51 crc kubenswrapper[4828]: I1129 07:26:51.427926 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b29e1b1d-2985-4461-b475-e6617923722e" path="/var/lib/kubelet/pods/b29e1b1d-2985-4461-b475-e6617923722e/volumes" Nov 29 07:26:51 crc kubenswrapper[4828]: I1129 07:26:51.635211 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:26:52 crc kubenswrapper[4828]: I1129 07:26:52.553183 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 07:26:52 crc kubenswrapper[4828]: I1129 07:26:52.600604 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd","Type":"ContainerStarted","Data":"f753196bef6670b63d82e607d64c67a88fc6029114d886d2443f936f7987240f"} Nov 29 07:26:52 crc kubenswrapper[4828]: I1129 07:26:52.600657 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd","Type":"ContainerStarted","Data":"d5ae66dd6b7791b2ea7d247fd3207cd9f29f733af7b0707bb82333b371ac8950"} Nov 29 07:26:52 crc kubenswrapper[4828]: I1129 07:26:52.600668 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd","Type":"ContainerStarted","Data":"7f2e083f813df2651e718ddefdb4b1ce971cc434255d12a009e1f6ff1fe6a2c6"} Nov 29 07:26:52 crc kubenswrapper[4828]: I1129 07:26:52.619529 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.61950529 podStartE2EDuration="2.61950529s" podCreationTimestamp="2025-11-29 07:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:52.617616101 +0000 UTC m=+1552.239692159" watchObservedRunningTime="2025-11-29 07:26:52.61950529 +0000 UTC m=+1552.241581348" Nov 29 07:26:52 crc kubenswrapper[4828]: I1129 07:26:52.787169 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.106337 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.106672 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.358954 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.428649 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-config-data\") pod \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.428805 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-log-httpd\") pod \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.428851 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rphfm\" (UniqueName: \"kubernetes.io/projected/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-kube-api-access-rphfm\") pod \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.428907 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-sg-core-conf-yaml\") pod \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.428955 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-run-httpd\") pod \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.428984 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-scripts\") pod \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.429078 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-combined-ca-bundle\") pod \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\" (UID: \"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c\") " Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.429782 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" (UID: "d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.430136 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" (UID: "d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.435346 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-kube-api-access-rphfm" (OuterVolumeSpecName: "kube-api-access-rphfm") pod "d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" (UID: "d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c"). InnerVolumeSpecName "kube-api-access-rphfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.436033 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-scripts" (OuterVolumeSpecName: "scripts") pod "d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" (UID: "d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.462441 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" (UID: "d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.517448 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" (UID: "d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.531600 4828 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.531651 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rphfm\" (UniqueName: \"kubernetes.io/projected/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-kube-api-access-rphfm\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.531668 4828 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.531681 4828 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.531692 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.531703 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.550969 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-config-data" (OuterVolumeSpecName: "config-data") pod "d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" (UID: "d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.615260 4828 generic.go:334] "Generic (PLEG): container finished" podID="38b2334c-7b03-45cb-a780-0b40f0bc7bc3" containerID="f9109334675860596cda3df54df7d97b62ebe78cb7f57c8b69ca82ccbdbe22ca" exitCode=0 Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.615382 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mrdgm" event={"ID":"38b2334c-7b03-45cb-a780-0b40f0bc7bc3","Type":"ContainerDied","Data":"f9109334675860596cda3df54df7d97b62ebe78cb7f57c8b69ca82ccbdbe22ca"} Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.621333 4828 generic.go:334] "Generic (PLEG): container finished" podID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerID="2c1db44a7dec13b832e01f0db052b823c2433a651cc05ea6f435b434575a732a" exitCode=0 Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.622691 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.626351 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c","Type":"ContainerDied","Data":"2c1db44a7dec13b832e01f0db052b823c2433a651cc05ea6f435b434575a732a"} Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.626518 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c","Type":"ContainerDied","Data":"d117b65ebb5ad30bf82dbbbaf52b7d2ff919cbd472a6afc2f46e9f57cdf38825"} Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.626557 4828 scope.go:117] "RemoveContainer" containerID="616abcb3a818ba9c8085e9d13dbcdb3176d5b8fe5d1a616cd7223fa5a21a75ac" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.636479 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.656757 4828 scope.go:117] "RemoveContainer" containerID="f1008e5e9e01ff7145e995542cf801c3538a834c4f3b1d91526a1ae6ef22cf53" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.676322 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.688058 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.689663 4828 scope.go:117] "RemoveContainer" containerID="6f65d0e98556eb99cd4ae5fac10a03f615e42afd806371e30433535660d78d7c" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.709135 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:53 crc kubenswrapper[4828]: E1129 07:26:53.710092 4828 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="proxy-httpd" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.710116 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="proxy-httpd" Nov 29 07:26:53 crc kubenswrapper[4828]: E1129 07:26:53.710130 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="sg-core" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.710138 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="sg-core" Nov 29 07:26:53 crc kubenswrapper[4828]: E1129 07:26:53.710165 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="ceilometer-central-agent" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.710173 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="ceilometer-central-agent" Nov 29 07:26:53 crc kubenswrapper[4828]: E1129 07:26:53.710187 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="ceilometer-notification-agent" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.710193 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="ceilometer-notification-agent" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.712082 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="ceilometer-central-agent" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.712130 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="proxy-httpd" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.712141 4828 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="ceilometer-notification-agent" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.712159 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" containerName="sg-core" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.716594 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.723459 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.723562 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.732402 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.747226 4828 scope.go:117] "RemoveContainer" containerID="2c1db44a7dec13b832e01f0db052b823c2433a651cc05ea6f435b434575a732a" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.776569 4828 scope.go:117] "RemoveContainer" containerID="616abcb3a818ba9c8085e9d13dbcdb3176d5b8fe5d1a616cd7223fa5a21a75ac" Nov 29 07:26:53 crc kubenswrapper[4828]: E1129 07:26:53.777043 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"616abcb3a818ba9c8085e9d13dbcdb3176d5b8fe5d1a616cd7223fa5a21a75ac\": container with ID starting with 616abcb3a818ba9c8085e9d13dbcdb3176d5b8fe5d1a616cd7223fa5a21a75ac not found: ID does not exist" containerID="616abcb3a818ba9c8085e9d13dbcdb3176d5b8fe5d1a616cd7223fa5a21a75ac" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.777104 4828 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"616abcb3a818ba9c8085e9d13dbcdb3176d5b8fe5d1a616cd7223fa5a21a75ac"} err="failed to get container status \"616abcb3a818ba9c8085e9d13dbcdb3176d5b8fe5d1a616cd7223fa5a21a75ac\": rpc error: code = NotFound desc = could not find container \"616abcb3a818ba9c8085e9d13dbcdb3176d5b8fe5d1a616cd7223fa5a21a75ac\": container with ID starting with 616abcb3a818ba9c8085e9d13dbcdb3176d5b8fe5d1a616cd7223fa5a21a75ac not found: ID does not exist" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.777141 4828 scope.go:117] "RemoveContainer" containerID="f1008e5e9e01ff7145e995542cf801c3538a834c4f3b1d91526a1ae6ef22cf53" Nov 29 07:26:53 crc kubenswrapper[4828]: E1129 07:26:53.777732 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1008e5e9e01ff7145e995542cf801c3538a834c4f3b1d91526a1ae6ef22cf53\": container with ID starting with f1008e5e9e01ff7145e995542cf801c3538a834c4f3b1d91526a1ae6ef22cf53 not found: ID does not exist" containerID="f1008e5e9e01ff7145e995542cf801c3538a834c4f3b1d91526a1ae6ef22cf53" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.777770 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1008e5e9e01ff7145e995542cf801c3538a834c4f3b1d91526a1ae6ef22cf53"} err="failed to get container status \"f1008e5e9e01ff7145e995542cf801c3538a834c4f3b1d91526a1ae6ef22cf53\": rpc error: code = NotFound desc = could not find container \"f1008e5e9e01ff7145e995542cf801c3538a834c4f3b1d91526a1ae6ef22cf53\": container with ID starting with f1008e5e9e01ff7145e995542cf801c3538a834c4f3b1d91526a1ae6ef22cf53 not found: ID does not exist" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.777793 4828 scope.go:117] "RemoveContainer" containerID="6f65d0e98556eb99cd4ae5fac10a03f615e42afd806371e30433535660d78d7c" Nov 29 07:26:53 crc kubenswrapper[4828]: E1129 07:26:53.778082 4828 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6f65d0e98556eb99cd4ae5fac10a03f615e42afd806371e30433535660d78d7c\": container with ID starting with 6f65d0e98556eb99cd4ae5fac10a03f615e42afd806371e30433535660d78d7c not found: ID does not exist" containerID="6f65d0e98556eb99cd4ae5fac10a03f615e42afd806371e30433535660d78d7c" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.778121 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f65d0e98556eb99cd4ae5fac10a03f615e42afd806371e30433535660d78d7c"} err="failed to get container status \"6f65d0e98556eb99cd4ae5fac10a03f615e42afd806371e30433535660d78d7c\": rpc error: code = NotFound desc = could not find container \"6f65d0e98556eb99cd4ae5fac10a03f615e42afd806371e30433535660d78d7c\": container with ID starting with 6f65d0e98556eb99cd4ae5fac10a03f615e42afd806371e30433535660d78d7c not found: ID does not exist" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.778141 4828 scope.go:117] "RemoveContainer" containerID="2c1db44a7dec13b832e01f0db052b823c2433a651cc05ea6f435b434575a732a" Nov 29 07:26:53 crc kubenswrapper[4828]: E1129 07:26:53.778489 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c1db44a7dec13b832e01f0db052b823c2433a651cc05ea6f435b434575a732a\": container with ID starting with 2c1db44a7dec13b832e01f0db052b823c2433a651cc05ea6f435b434575a732a not found: ID does not exist" containerID="2c1db44a7dec13b832e01f0db052b823c2433a651cc05ea6f435b434575a732a" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.778526 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1db44a7dec13b832e01f0db052b823c2433a651cc05ea6f435b434575a732a"} err="failed to get container status \"2c1db44a7dec13b832e01f0db052b823c2433a651cc05ea6f435b434575a732a\": rpc error: code = NotFound desc = could not find container 
\"2c1db44a7dec13b832e01f0db052b823c2433a651cc05ea6f435b434575a732a\": container with ID starting with 2c1db44a7dec13b832e01f0db052b823c2433a651cc05ea6f435b434575a732a not found: ID does not exist" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.840736 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-scripts\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.840882 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.841396 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppw4d\" (UniqueName: \"kubernetes.io/projected/04f457b1-ae28-4750-8777-9cd632aa4678-kube-api-access-ppw4d\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.841513 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-config-data\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.841539 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f457b1-ae28-4750-8777-9cd632aa4678-run-httpd\") pod \"ceilometer-0\" (UID: 
\"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.841560 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.841591 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f457b1-ae28-4750-8777-9cd632aa4678-log-httpd\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.942725 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.942782 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppw4d\" (UniqueName: \"kubernetes.io/projected/04f457b1-ae28-4750-8777-9cd632aa4678-kube-api-access-ppw4d\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.942887 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-config-data\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.942919 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f457b1-ae28-4750-8777-9cd632aa4678-run-httpd\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.942941 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.942966 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f457b1-ae28-4750-8777-9cd632aa4678-log-httpd\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.943020 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-scripts\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.944210 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f457b1-ae28-4750-8777-9cd632aa4678-run-httpd\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.944313 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f457b1-ae28-4750-8777-9cd632aa4678-log-httpd\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc 
kubenswrapper[4828]: I1129 07:26:53.949533 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.950017 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.950124 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-scripts\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.954771 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-config-data\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:53 crc kubenswrapper[4828]: I1129 07:26:53.961748 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppw4d\" (UniqueName: \"kubernetes.io/projected/04f457b1-ae28-4750-8777-9cd632aa4678-kube-api-access-ppw4d\") pod \"ceilometer-0\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") " pod="openstack/ceilometer-0" Nov 29 07:26:54 crc kubenswrapper[4828]: I1129 07:26:54.040682 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:26:54 crc kubenswrapper[4828]: W1129 07:26:54.502799 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04f457b1_ae28_4750_8777_9cd632aa4678.slice/crio-ebddff6fea5825605db696f8c19642851a4c4386eee7f7d6b717140569314ada WatchSource:0}: Error finding container ebddff6fea5825605db696f8c19642851a4c4386eee7f7d6b717140569314ada: Status 404 returned error can't find the container with id ebddff6fea5825605db696f8c19642851a4c4386eee7f7d6b717140569314ada Nov 29 07:26:54 crc kubenswrapper[4828]: I1129 07:26:54.506180 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:54 crc kubenswrapper[4828]: I1129 07:26:54.634199 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f457b1-ae28-4750-8777-9cd632aa4678","Type":"ContainerStarted","Data":"ebddff6fea5825605db696f8c19642851a4c4386eee7f7d6b717140569314ada"} Nov 29 07:26:54 crc kubenswrapper[4828]: I1129 07:26:54.987198 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.069867 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-combined-ca-bundle\") pod \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.115989 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38b2334c-7b03-45cb-a780-0b40f0bc7bc3" (UID: "38b2334c-7b03-45cb-a780-0b40f0bc7bc3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.171809 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvbx6\" (UniqueName: \"kubernetes.io/projected/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-kube-api-access-cvbx6\") pod \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.171898 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-scripts\") pod \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.171940 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-config-data\") pod \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\" (UID: \"38b2334c-7b03-45cb-a780-0b40f0bc7bc3\") " Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.172523 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.175920 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-scripts" (OuterVolumeSpecName: "scripts") pod "38b2334c-7b03-45cb-a780-0b40f0bc7bc3" (UID: "38b2334c-7b03-45cb-a780-0b40f0bc7bc3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.175948 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-kube-api-access-cvbx6" (OuterVolumeSpecName: "kube-api-access-cvbx6") pod "38b2334c-7b03-45cb-a780-0b40f0bc7bc3" (UID: "38b2334c-7b03-45cb-a780-0b40f0bc7bc3"). InnerVolumeSpecName "kube-api-access-cvbx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.200794 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-config-data" (OuterVolumeSpecName: "config-data") pod "38b2334c-7b03-45cb-a780-0b40f0bc7bc3" (UID: "38b2334c-7b03-45cb-a780-0b40f0bc7bc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.273196 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.273235 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvbx6\" (UniqueName: \"kubernetes.io/projected/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-kube-api-access-cvbx6\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.273248 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b2334c-7b03-45cb-a780-0b40f0bc7bc3-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.425128 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c" path="/var/lib/kubelet/pods/d4fb7d2b-c0f6-4425-9d0b-c0431ecacf5c/volumes" Nov 29 
07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.651129 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mrdgm" event={"ID":"38b2334c-7b03-45cb-a780-0b40f0bc7bc3","Type":"ContainerDied","Data":"e607f0bb2dbf1533087a3c30f69a1495366820cc5d82e03ceba034a7c0f99d5c"} Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.651181 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e607f0bb2dbf1533087a3c30f69a1495366820cc5d82e03ceba034a7c0f99d5c" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.651185 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mrdgm" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.737650 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 07:26:55 crc kubenswrapper[4828]: E1129 07:26:55.738712 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b2334c-7b03-45cb-a780-0b40f0bc7bc3" containerName="nova-cell1-conductor-db-sync" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.738743 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b2334c-7b03-45cb-a780-0b40f0bc7bc3" containerName="nova-cell1-conductor-db-sync" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.739329 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b2334c-7b03-45cb-a780-0b40f0bc7bc3" containerName="nova-cell1-conductor-db-sync" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.746949 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.751117 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 29 07:26:55 crc kubenswrapper[4828]: I1129 07:26:55.764036 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 07:26:56 crc kubenswrapper[4828]: I1129 07:26:56.004921 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc2wd\" (UniqueName: \"kubernetes.io/projected/c20767ac-ea5b-4bde-80f3-9e6355039f15-kube-api-access-zc2wd\") pod \"nova-cell1-conductor-0\" (UID: \"c20767ac-ea5b-4bde-80f3-9e6355039f15\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:26:56 crc kubenswrapper[4828]: I1129 07:26:56.005529 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c20767ac-ea5b-4bde-80f3-9e6355039f15-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c20767ac-ea5b-4bde-80f3-9e6355039f15\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:26:56 crc kubenswrapper[4828]: I1129 07:26:56.005593 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20767ac-ea5b-4bde-80f3-9e6355039f15-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c20767ac-ea5b-4bde-80f3-9e6355039f15\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:26:56 crc kubenswrapper[4828]: I1129 07:26:56.107637 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c20767ac-ea5b-4bde-80f3-9e6355039f15-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c20767ac-ea5b-4bde-80f3-9e6355039f15\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:26:56 crc 
kubenswrapper[4828]: I1129 07:26:56.107727 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20767ac-ea5b-4bde-80f3-9e6355039f15-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c20767ac-ea5b-4bde-80f3-9e6355039f15\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:26:56 crc kubenswrapper[4828]: I1129 07:26:56.107879 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc2wd\" (UniqueName: \"kubernetes.io/projected/c20767ac-ea5b-4bde-80f3-9e6355039f15-kube-api-access-zc2wd\") pod \"nova-cell1-conductor-0\" (UID: \"c20767ac-ea5b-4bde-80f3-9e6355039f15\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:26:56 crc kubenswrapper[4828]: I1129 07:26:56.114303 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20767ac-ea5b-4bde-80f3-9e6355039f15-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c20767ac-ea5b-4bde-80f3-9e6355039f15\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:26:56 crc kubenswrapper[4828]: I1129 07:26:56.114387 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c20767ac-ea5b-4bde-80f3-9e6355039f15-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c20767ac-ea5b-4bde-80f3-9e6355039f15\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:26:56 crc kubenswrapper[4828]: I1129 07:26:56.127117 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc2wd\" (UniqueName: \"kubernetes.io/projected/c20767ac-ea5b-4bde-80f3-9e6355039f15-kube-api-access-zc2wd\") pod \"nova-cell1-conductor-0\" (UID: \"c20767ac-ea5b-4bde-80f3-9e6355039f15\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:26:56 crc kubenswrapper[4828]: I1129 07:26:56.386451 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 29 07:26:56 crc kubenswrapper[4828]: I1129 07:26:56.665889 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f457b1-ae28-4750-8777-9cd632aa4678","Type":"ContainerStarted","Data":"d494a690fed971462e896f860f1a526a7046f6f1a6f1e52e305bfd38abff62cd"} Nov 29 07:26:56 crc kubenswrapper[4828]: I1129 07:26:56.843077 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 07:26:56 crc kubenswrapper[4828]: W1129 07:26:56.850263 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20767ac_ea5b_4bde_80f3_9e6355039f15.slice/crio-e2e0f736fd2a81538e27f866a2b1ef575b2669b3df277b81717d8618ebabd32d WatchSource:0}: Error finding container e2e0f736fd2a81538e27f866a2b1ef575b2669b3df277b81717d8618ebabd32d: Status 404 returned error can't find the container with id e2e0f736fd2a81538e27f866a2b1ef575b2669b3df277b81717d8618ebabd32d Nov 29 07:26:57 crc kubenswrapper[4828]: I1129 07:26:57.678239 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c20767ac-ea5b-4bde-80f3-9e6355039f15","Type":"ContainerStarted","Data":"e29db4dfb803f2647dad3b5b8ee4e90a0f949e63c2700fc573ddf605272b7b4f"} Nov 29 07:26:57 crc kubenswrapper[4828]: I1129 07:26:57.678593 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c20767ac-ea5b-4bde-80f3-9e6355039f15","Type":"ContainerStarted","Data":"e2e0f736fd2a81538e27f866a2b1ef575b2669b3df277b81717d8618ebabd32d"} Nov 29 07:26:57 crc kubenswrapper[4828]: I1129 07:26:57.679806 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 29 07:26:57 crc kubenswrapper[4828]: I1129 07:26:57.698952 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.6989292320000002 podStartE2EDuration="2.698929232s" podCreationTimestamp="2025-11-29 07:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:57.693449921 +0000 UTC m=+1557.315525979" watchObservedRunningTime="2025-11-29 07:26:57.698929232 +0000 UTC m=+1557.321005290" Nov 29 07:26:58 crc kubenswrapper[4828]: I1129 07:26:58.690793 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f457b1-ae28-4750-8777-9cd632aa4678","Type":"ContainerStarted","Data":"45356f335dd22e3e737caef4aea26236f93233b3f4a9acbda8c4f8bfd66ab0df"} Nov 29 07:26:58 crc kubenswrapper[4828]: I1129 07:26:58.993332 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 29 07:26:59 crc kubenswrapper[4828]: I1129 07:26:59.477189 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:26:59 crc kubenswrapper[4828]: I1129 07:26:59.477507 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" containerName="nova-api-log" containerID="cri-o://d5ae66dd6b7791b2ea7d247fd3207cd9f29f733af7b0707bb82333b371ac8950" gracePeriod=30 Nov 29 07:26:59 crc kubenswrapper[4828]: I1129 07:26:59.478062 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" containerName="nova-api-api" containerID="cri-o://f753196bef6670b63d82e607d64c67a88fc6029114d886d2443f936f7987240f" gracePeriod=30 Nov 29 07:26:59 crc kubenswrapper[4828]: I1129 07:26:59.705234 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"04f457b1-ae28-4750-8777-9cd632aa4678","Type":"ContainerStarted","Data":"7ceb12baccb2ca9e97534dfe0ac78f43249ea03690692806897a9aa4a0f20231"} Nov 29 07:27:00 crc kubenswrapper[4828]: I1129 07:27:00.716536 4828 generic.go:334] "Generic (PLEG): container finished" podID="29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" containerID="d5ae66dd6b7791b2ea7d247fd3207cd9f29f733af7b0707bb82333b371ac8950" exitCode=143 Nov 29 07:27:00 crc kubenswrapper[4828]: I1129 07:27:00.716675 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd","Type":"ContainerDied","Data":"d5ae66dd6b7791b2ea7d247fd3207cd9f29f733af7b0707bb82333b371ac8950"} Nov 29 07:27:01 crc kubenswrapper[4828]: I1129 07:27:01.729493 4828 generic.go:334] "Generic (PLEG): container finished" podID="29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" containerID="f753196bef6670b63d82e607d64c67a88fc6029114d886d2443f936f7987240f" exitCode=0 Nov 29 07:27:01 crc kubenswrapper[4828]: I1129 07:27:01.729550 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd","Type":"ContainerDied","Data":"f753196bef6670b63d82e607d64c67a88fc6029114d886d2443f936f7987240f"} Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.493393 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.644802 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-logs\") pod \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.644980 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kh8p\" (UniqueName: \"kubernetes.io/projected/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-kube-api-access-5kh8p\") pod \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.645089 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-combined-ca-bundle\") pod \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.645108 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-config-data\") pod \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\" (UID: \"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd\") " Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.646236 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-logs" (OuterVolumeSpecName: "logs") pod "29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" (UID: "29e917f5-3c75-474c-8d5c-1ef02ba2b2cd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.652289 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-kube-api-access-5kh8p" (OuterVolumeSpecName: "kube-api-access-5kh8p") pod "29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" (UID: "29e917f5-3c75-474c-8d5c-1ef02ba2b2cd"). InnerVolumeSpecName "kube-api-access-5kh8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.679706 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" (UID: "29e917f5-3c75-474c-8d5c-1ef02ba2b2cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.696482 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-config-data" (OuterVolumeSpecName: "config-data") pod "29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" (UID: "29e917f5-3c75-474c-8d5c-1ef02ba2b2cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.746997 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kh8p\" (UniqueName: \"kubernetes.io/projected/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-kube-api-access-5kh8p\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.747042 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.747055 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.747068 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.756435 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29e917f5-3c75-474c-8d5c-1ef02ba2b2cd","Type":"ContainerDied","Data":"7f2e083f813df2651e718ddefdb4b1ce971cc434255d12a009e1f6ff1fe6a2c6"} Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.756502 4828 scope.go:117] "RemoveContainer" containerID="f753196bef6670b63d82e607d64c67a88fc6029114d886d2443f936f7987240f" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.756538 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.800412 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.805616 4828 scope.go:117] "RemoveContainer" containerID="d5ae66dd6b7791b2ea7d247fd3207cd9f29f733af7b0707bb82333b371ac8950" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.829365 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.855891 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 07:27:03 crc kubenswrapper[4828]: E1129 07:27:03.856419 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" containerName="nova-api-log" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.856442 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" containerName="nova-api-log" Nov 29 07:27:03 crc kubenswrapper[4828]: E1129 07:27:03.856453 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" containerName="nova-api-api" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.856462 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" containerName="nova-api-api" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.856738 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" containerName="nova-api-api" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.856779 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" containerName="nova-api-log" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.858143 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:27:03 crc kubenswrapper[4828]: I1129 07:27:03.868236 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.051354 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7da5a0ad-86da-4601-af2a-9674af58b6e0-logs\") pod \"nova-api-0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") " pod="openstack/nova-api-0" Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.052325 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhl45\" (UniqueName: \"kubernetes.io/projected/7da5a0ad-86da-4601-af2a-9674af58b6e0-kube-api-access-qhl45\") pod \"nova-api-0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") " pod="openstack/nova-api-0" Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.052397 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7da5a0ad-86da-4601-af2a-9674af58b6e0-config-data\") pod \"nova-api-0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") " pod="openstack/nova-api-0" Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.052488 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7da5a0ad-86da-4601-af2a-9674af58b6e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") " pod="openstack/nova-api-0" Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.154115 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7da5a0ad-86da-4601-af2a-9674af58b6e0-logs\") pod \"nova-api-0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") 
" pod="openstack/nova-api-0" Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.154194 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhl45\" (UniqueName: \"kubernetes.io/projected/7da5a0ad-86da-4601-af2a-9674af58b6e0-kube-api-access-qhl45\") pod \"nova-api-0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") " pod="openstack/nova-api-0" Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.154257 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7da5a0ad-86da-4601-af2a-9674af58b6e0-config-data\") pod \"nova-api-0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") " pod="openstack/nova-api-0" Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.154321 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7da5a0ad-86da-4601-af2a-9674af58b6e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") " pod="openstack/nova-api-0" Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.154878 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7da5a0ad-86da-4601-af2a-9674af58b6e0-logs\") pod \"nova-api-0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") " pod="openstack/nova-api-0" Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.159038 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7da5a0ad-86da-4601-af2a-9674af58b6e0-config-data\") pod \"nova-api-0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") " pod="openstack/nova-api-0" Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.159743 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7da5a0ad-86da-4601-af2a-9674af58b6e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") " pod="openstack/nova-api-0" Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.172980 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhl45\" (UniqueName: \"kubernetes.io/projected/7da5a0ad-86da-4601-af2a-9674af58b6e0-kube-api-access-qhl45\") pod \"nova-api-0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") " pod="openstack/nova-api-0" Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.181090 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.215856 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.610685 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:27:04 crc kubenswrapper[4828]: I1129 07:27:04.771013 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7da5a0ad-86da-4601-af2a-9674af58b6e0","Type":"ContainerStarted","Data":"59ae5387516b7a1b960394d127740ea60530d0c395687d715087d617230bc5b3"} Nov 29 07:27:05 crc kubenswrapper[4828]: I1129 07:27:05.434763 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e917f5-3c75-474c-8d5c-1ef02ba2b2cd" path="/var/lib/kubelet/pods/29e917f5-3c75-474c-8d5c-1ef02ba2b2cd/volumes" Nov 29 07:27:05 crc kubenswrapper[4828]: I1129 07:27:05.785894 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7da5a0ad-86da-4601-af2a-9674af58b6e0","Type":"ContainerStarted","Data":"c41edc40e5d1a8652545fc5d12c154b5c69fe22e7378f6ec7df8c93e65b451cb"} Nov 29 07:27:05 crc kubenswrapper[4828]: I1129 07:27:05.785938 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"7da5a0ad-86da-4601-af2a-9674af58b6e0","Type":"ContainerStarted","Data":"b6e2d20b15645413b0d5899c77adc882c6d951d04cfebbdb5dcb6e811cf402ea"} Nov 29 07:27:05 crc kubenswrapper[4828]: I1129 07:27:05.812757 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.812736907 podStartE2EDuration="2.812736907s" podCreationTimestamp="2025-11-29 07:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:27:05.804903035 +0000 UTC m=+1565.426979113" watchObservedRunningTime="2025-11-29 07:27:05.812736907 +0000 UTC m=+1565.434812965" Nov 29 07:27:06 crc kubenswrapper[4828]: I1129 07:27:06.429534 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 29 07:27:07 crc kubenswrapper[4828]: I1129 07:27:07.809564 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f457b1-ae28-4750-8777-9cd632aa4678","Type":"ContainerStarted","Data":"753c6575f90b5cf6295543e26b09f67c25a1779c068d688bea1c49b89a4992bf"} Nov 29 07:27:07 crc kubenswrapper[4828]: I1129 07:27:07.809898 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:27:07 crc kubenswrapper[4828]: I1129 07:27:07.840440 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.5199074599999998 podStartE2EDuration="14.840414722s" podCreationTimestamp="2025-11-29 07:26:53 +0000 UTC" firstStartedPulling="2025-11-29 07:26:54.506038219 +0000 UTC m=+1554.128114277" lastFinishedPulling="2025-11-29 07:27:05.826545481 +0000 UTC m=+1565.448621539" observedRunningTime="2025-11-29 07:27:07.8290231 +0000 UTC m=+1567.451099168" watchObservedRunningTime="2025-11-29 07:27:07.840414722 +0000 UTC m=+1567.462490780" Nov 29 07:27:11 crc kubenswrapper[4828]: 
I1129 07:27:11.486754 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:27:11 crc kubenswrapper[4828]: I1129 07:27:11.487186 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:27:14 crc kubenswrapper[4828]: I1129 07:27:14.182694 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:27:14 crc kubenswrapper[4828]: I1129 07:27:14.183057 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:27:15 crc kubenswrapper[4828]: I1129 07:27:15.225660 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7da5a0ad-86da-4601-af2a-9674af58b6e0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:27:15 crc kubenswrapper[4828]: I1129 07:27:15.225701 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7da5a0ad-86da-4601-af2a-9674af58b6e0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:27:18 crc kubenswrapper[4828]: E1129 07:27:18.790174 4828 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83e45763_9f9d_4ce2_adc6_2f85184fefd4.slice/crio-conmon-a90307cef54a6ead56925f280411cc6e2241b86030b4efaead46f880339d5dd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83e45763_9f9d_4ce2_adc6_2f85184fefd4.slice/crio-a90307cef54a6ead56925f280411cc6e2241b86030b4efaead46f880339d5dd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6df68e9f_a72e_4e1b_993b_b7d5b9677fd6.slice/crio-conmon-693984c129ff42ea06348265749095aa794e1690959fe4bf9b805a96548bb62d.scope\": RecentStats: unable to find data in memory cache]" Nov 29 07:27:18 crc kubenswrapper[4828]: I1129 07:27:18.947459 4828 generic.go:334] "Generic (PLEG): container finished" podID="a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f" containerID="f0dfdc647e462852c4bb506b4b2a2b6dd3764f0a0b45c8e722c325f30782b689" exitCode=137 Nov 29 07:27:18 crc kubenswrapper[4828]: I1129 07:27:18.947570 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f","Type":"ContainerDied","Data":"f0dfdc647e462852c4bb506b4b2a2b6dd3764f0a0b45c8e722c325f30782b689"} Nov 29 07:27:18 crc kubenswrapper[4828]: I1129 07:27:18.947655 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f","Type":"ContainerDied","Data":"b40085dcb7981a955e92e289971f73cc4deebba1132f21fcf138665f5365de6a"} Nov 29 07:27:18 crc kubenswrapper[4828]: I1129 07:27:18.947670 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b40085dcb7981a955e92e289971f73cc4deebba1132f21fcf138665f5365de6a" Nov 29 07:27:18 crc kubenswrapper[4828]: I1129 07:27:18.950097 4828 generic.go:334] "Generic (PLEG): container finished" podID="83e45763-9f9d-4ce2-adc6-2f85184fefd4" 
containerID="a90307cef54a6ead56925f280411cc6e2241b86030b4efaead46f880339d5dd0" exitCode=137 Nov 29 07:27:18 crc kubenswrapper[4828]: I1129 07:27:18.950200 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83e45763-9f9d-4ce2-adc6-2f85184fefd4","Type":"ContainerDied","Data":"a90307cef54a6ead56925f280411cc6e2241b86030b4efaead46f880339d5dd0"} Nov 29 07:27:18 crc kubenswrapper[4828]: I1129 07:27:18.952639 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6","Type":"ContainerDied","Data":"693984c129ff42ea06348265749095aa794e1690959fe4bf9b805a96548bb62d"} Nov 29 07:27:18 crc kubenswrapper[4828]: I1129 07:27:18.952517 4828 generic.go:334] "Generic (PLEG): container finished" podID="6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" containerID="693984c129ff42ea06348265749095aa794e1690959fe4bf9b805a96548bb62d" exitCode=137 Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.081075 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.262813 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrv4p\" (UniqueName: \"kubernetes.io/projected/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-kube-api-access-rrv4p\") pod \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\" (UID: \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\") " Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.263542 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-config-data\") pod \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\" (UID: \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\") " Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.263582 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-combined-ca-bundle\") pod \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\" (UID: \"a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f\") " Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.273072 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-kube-api-access-rrv4p" (OuterVolumeSpecName: "kube-api-access-rrv4p") pod "a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f" (UID: "a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f"). InnerVolumeSpecName "kube-api-access-rrv4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.311680 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f" (UID: "a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.332948 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-config-data" (OuterVolumeSpecName: "config-data") pod "a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f" (UID: "a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.366399 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.366439 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.366452 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrv4p\" (UniqueName: \"kubernetes.io/projected/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f-kube-api-access-rrv4p\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.422386 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.511721 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.569545 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-combined-ca-bundle\") pod \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.569688 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtkrh\" (UniqueName: \"kubernetes.io/projected/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-kube-api-access-vtkrh\") pod \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.569843 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-config-data\") pod \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.569870 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-logs\") pod \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\" (UID: \"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6\") " Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.570649 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-logs" (OuterVolumeSpecName: "logs") pod "6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" (UID: "6df68e9f-a72e-4e1b-993b-b7d5b9677fd6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.575584 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-kube-api-access-vtkrh" (OuterVolumeSpecName: "kube-api-access-vtkrh") pod "6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" (UID: "6df68e9f-a72e-4e1b-993b-b7d5b9677fd6"). InnerVolumeSpecName "kube-api-access-vtkrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.602586 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-config-data" (OuterVolumeSpecName: "config-data") pod "6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" (UID: "6df68e9f-a72e-4e1b-993b-b7d5b9677fd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.604488 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" (UID: "6df68e9f-a72e-4e1b-993b-b7d5b9677fd6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.672497 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4crz6\" (UniqueName: \"kubernetes.io/projected/83e45763-9f9d-4ce2-adc6-2f85184fefd4-kube-api-access-4crz6\") pod \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\" (UID: \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\") " Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.672653 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e45763-9f9d-4ce2-adc6-2f85184fefd4-combined-ca-bundle\") pod \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\" (UID: \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\") " Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.672752 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e45763-9f9d-4ce2-adc6-2f85184fefd4-config-data\") pod \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\" (UID: \"83e45763-9f9d-4ce2-adc6-2f85184fefd4\") " Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.673730 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.673781 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.673801 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.673815 4828 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-vtkrh\" (UniqueName: \"kubernetes.io/projected/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6-kube-api-access-vtkrh\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.676345 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e45763-9f9d-4ce2-adc6-2f85184fefd4-kube-api-access-4crz6" (OuterVolumeSpecName: "kube-api-access-4crz6") pod "83e45763-9f9d-4ce2-adc6-2f85184fefd4" (UID: "83e45763-9f9d-4ce2-adc6-2f85184fefd4"). InnerVolumeSpecName "kube-api-access-4crz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.703956 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e45763-9f9d-4ce2-adc6-2f85184fefd4-config-data" (OuterVolumeSpecName: "config-data") pod "83e45763-9f9d-4ce2-adc6-2f85184fefd4" (UID: "83e45763-9f9d-4ce2-adc6-2f85184fefd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.706081 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e45763-9f9d-4ce2-adc6-2f85184fefd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83e45763-9f9d-4ce2-adc6-2f85184fefd4" (UID: "83e45763-9f9d-4ce2-adc6-2f85184fefd4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.775951 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4crz6\" (UniqueName: \"kubernetes.io/projected/83e45763-9f9d-4ce2-adc6-2f85184fefd4-kube-api-access-4crz6\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.776006 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e45763-9f9d-4ce2-adc6-2f85184fefd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.776020 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e45763-9f9d-4ce2-adc6-2f85184fefd4-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.965507 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83e45763-9f9d-4ce2-adc6-2f85184fefd4","Type":"ContainerDied","Data":"ba5e3d0ccf38d2368771f88c1e2565b497d30fe1c511ca93f83060f5e33b42ff"} Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.965572 4828 scope.go:117] "RemoveContainer" containerID="a90307cef54a6ead56925f280411cc6e2241b86030b4efaead46f880339d5dd0" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.965580 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.969539 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.969516 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6df68e9f-a72e-4e1b-993b-b7d5b9677fd6","Type":"ContainerDied","Data":"bd49e633e8e63b932f11e6502e7bc7103e8a8831d5eba6c5eb0080540b775aab"} Nov 29 07:27:19 crc kubenswrapper[4828]: I1129 07:27:19.969546 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.001393 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.002330 4828 scope.go:117] "RemoveContainer" containerID="693984c129ff42ea06348265749095aa794e1690959fe4bf9b805a96548bb62d" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.020502 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.039411 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.054565 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.067352 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:27:20 crc kubenswrapper[4828]: E1129 07:27:20.067989 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f" containerName="nova-cell1-novncproxy-novncproxy" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.068019 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f" containerName="nova-cell1-novncproxy-novncproxy" Nov 29 07:27:20 crc kubenswrapper[4828]: E1129 07:27:20.068059 4828 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" containerName="nova-metadata-log" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.068068 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" containerName="nova-metadata-log" Nov 29 07:27:20 crc kubenswrapper[4828]: E1129 07:27:20.068084 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e45763-9f9d-4ce2-adc6-2f85184fefd4" containerName="nova-scheduler-scheduler" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.068094 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e45763-9f9d-4ce2-adc6-2f85184fefd4" containerName="nova-scheduler-scheduler" Nov 29 07:27:20 crc kubenswrapper[4828]: E1129 07:27:20.068115 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" containerName="nova-metadata-metadata" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.068123 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" containerName="nova-metadata-metadata" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.068385 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f" containerName="nova-cell1-novncproxy-novncproxy" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.068403 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e45763-9f9d-4ce2-adc6-2f85184fefd4" containerName="nova-scheduler-scheduler" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.068411 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" containerName="nova-metadata-log" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.068432 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" 
containerName="nova-metadata-metadata" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.069358 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.077610 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.077871 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.078041 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.078221 4828 scope.go:117] "RemoveContainer" containerID="2fd1a9fe6de9eed5521a75874efebf750fd9d478f8f873ec097057bfcfe5ea1a" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.103360 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.113760 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.125575 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.127866 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.130812 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.131095 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.141410 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.154152 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.165082 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.166555 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.169434 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.177008 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.185503 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/508b6f36-4c27-431d-aafa-94c8150647a4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.185558 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/508b6f36-4c27-431d-aafa-94c8150647a4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.185597 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/508b6f36-4c27-431d-aafa-94c8150647a4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.185623 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/508b6f36-4c27-431d-aafa-94c8150647a4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.185641 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7xql\" (UniqueName: \"kubernetes.io/projected/508b6f36-4c27-431d-aafa-94c8150647a4-kube-api-access-z7xql\") pod \"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.287565 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.287635 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/5d31dd62-6c7f-4529-8da5-cfb615b653e2-logs\") pod \"nova-metadata-0\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.287666 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69nz7\" (UniqueName: \"kubernetes.io/projected/5d31dd62-6c7f-4529-8da5-cfb615b653e2-kube-api-access-69nz7\") pod \"nova-metadata-0\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.287735 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977eeb3-82a7-42c7-9bae-29b46a93a75e-config-data\") pod \"nova-scheduler-0\" (UID: \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\") " pod="openstack/nova-scheduler-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.287761 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977eeb3-82a7-42c7-9bae-29b46a93a75e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\") " pod="openstack/nova-scheduler-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.287790 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/508b6f36-4c27-431d-aafa-94c8150647a4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.287815 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6tpf\" (UniqueName: 
\"kubernetes.io/projected/6977eeb3-82a7-42c7-9bae-29b46a93a75e-kube-api-access-r6tpf\") pod \"nova-scheduler-0\" (UID: \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\") " pod="openstack/nova-scheduler-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.287834 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.287860 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/508b6f36-4c27-431d-aafa-94c8150647a4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.287878 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-config-data\") pod \"nova-metadata-0\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.287911 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/508b6f36-4c27-431d-aafa-94c8150647a4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.287939 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/508b6f36-4c27-431d-aafa-94c8150647a4-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.287955 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7xql\" (UniqueName: \"kubernetes.io/projected/508b6f36-4c27-431d-aafa-94c8150647a4-kube-api-access-z7xql\") pod \"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.294449 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/508b6f36-4c27-431d-aafa-94c8150647a4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.294583 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/508b6f36-4c27-431d-aafa-94c8150647a4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.295405 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/508b6f36-4c27-431d-aafa-94c8150647a4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.308717 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/508b6f36-4c27-431d-aafa-94c8150647a4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.316156 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7xql\" (UniqueName: \"kubernetes.io/projected/508b6f36-4c27-431d-aafa-94c8150647a4-kube-api-access-z7xql\") pod \"nova-cell1-novncproxy-0\" (UID: \"508b6f36-4c27-431d-aafa-94c8150647a4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.391664 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6tpf\" (UniqueName: \"kubernetes.io/projected/6977eeb3-82a7-42c7-9bae-29b46a93a75e-kube-api-access-r6tpf\") pod \"nova-scheduler-0\" (UID: \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\") " pod="openstack/nova-scheduler-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.391754 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.392567 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-config-data\") pod \"nova-metadata-0\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.393247 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.393447 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d31dd62-6c7f-4529-8da5-cfb615b653e2-logs\") pod \"nova-metadata-0\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.393847 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d31dd62-6c7f-4529-8da5-cfb615b653e2-logs\") pod \"nova-metadata-0\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.393926 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69nz7\" (UniqueName: \"kubernetes.io/projected/5d31dd62-6c7f-4529-8da5-cfb615b653e2-kube-api-access-69nz7\") pod \"nova-metadata-0\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.394503 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977eeb3-82a7-42c7-9bae-29b46a93a75e-config-data\") pod \"nova-scheduler-0\" (UID: \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\") " pod="openstack/nova-scheduler-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.394582 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977eeb3-82a7-42c7-9bae-29b46a93a75e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\") " pod="openstack/nova-scheduler-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.395860 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.397450 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-config-data\") pod \"nova-metadata-0\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.399251 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.399661 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977eeb3-82a7-42c7-9bae-29b46a93a75e-config-data\") pod \"nova-scheduler-0\" (UID: \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\") " pod="openstack/nova-scheduler-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.410086 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977eeb3-82a7-42c7-9bae-29b46a93a75e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\") " pod="openstack/nova-scheduler-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.410534 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6tpf\" (UniqueName: \"kubernetes.io/projected/6977eeb3-82a7-42c7-9bae-29b46a93a75e-kube-api-access-r6tpf\") pod \"nova-scheduler-0\" (UID: \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\") " pod="openstack/nova-scheduler-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.413944 4828 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-69nz7\" (UniqueName: \"kubernetes.io/projected/5d31dd62-6c7f-4529-8da5-cfb615b653e2-kube-api-access-69nz7\") pod \"nova-metadata-0\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.416924 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.452950 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.488618 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.964488 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:27:20 crc kubenswrapper[4828]: I1129 07:27:20.992893 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"508b6f36-4c27-431d-aafa-94c8150647a4","Type":"ContainerStarted","Data":"047ad72b22556bbd6cf61007dcc4addf5fba8cb86eea07673b48931b1c447291"} Nov 29 07:27:21 crc kubenswrapper[4828]: I1129 07:27:21.068953 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:27:21 crc kubenswrapper[4828]: W1129 07:27:21.079704 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d31dd62_6c7f_4529_8da5_cfb615b653e2.slice/crio-b030bcee3cc018e37aa5db3b301cc247aefa79e45bce4350724ba13c1cf31790 WatchSource:0}: Error finding container b030bcee3cc018e37aa5db3b301cc247aefa79e45bce4350724ba13c1cf31790: Status 404 returned error can't find the container with id b030bcee3cc018e37aa5db3b301cc247aefa79e45bce4350724ba13c1cf31790 Nov 29 07:27:21 crc kubenswrapper[4828]: I1129 07:27:21.153451 4828 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:27:21 crc kubenswrapper[4828]: I1129 07:27:21.424409 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6df68e9f-a72e-4e1b-993b-b7d5b9677fd6" path="/var/lib/kubelet/pods/6df68e9f-a72e-4e1b-993b-b7d5b9677fd6/volumes" Nov 29 07:27:21 crc kubenswrapper[4828]: I1129 07:27:21.425300 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e45763-9f9d-4ce2-adc6-2f85184fefd4" path="/var/lib/kubelet/pods/83e45763-9f9d-4ce2-adc6-2f85184fefd4/volumes" Nov 29 07:27:21 crc kubenswrapper[4828]: I1129 07:27:21.425922 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f" path="/var/lib/kubelet/pods/a97fa61b-8d8c-45c9-9ab2-6fba48e1bf3f/volumes" Nov 29 07:27:22 crc kubenswrapper[4828]: I1129 07:27:22.003620 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"508b6f36-4c27-431d-aafa-94c8150647a4","Type":"ContainerStarted","Data":"b7da05ac21c1e2e32c7c973f3656f53b71d1bf15d91679526fee6d24138a9700"} Nov 29 07:27:22 crc kubenswrapper[4828]: I1129 07:27:22.006698 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6977eeb3-82a7-42c7-9bae-29b46a93a75e","Type":"ContainerStarted","Data":"8a2cc258bfc021243f2d3bb4f2682c836f6184b28366a147252f36abd600933a"} Nov 29 07:27:22 crc kubenswrapper[4828]: I1129 07:27:22.006762 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6977eeb3-82a7-42c7-9bae-29b46a93a75e","Type":"ContainerStarted","Data":"a0d0de89ae683461dd3298626885d0b065ce570f041206ff00fac913ed0b1326"} Nov 29 07:27:22 crc kubenswrapper[4828]: I1129 07:27:22.009832 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"5d31dd62-6c7f-4529-8da5-cfb615b653e2","Type":"ContainerStarted","Data":"cdccc5e04b8d60115a27a2d2b6f11dde463c4917d0a99e668f4a909222886cb0"} Nov 29 07:27:22 crc kubenswrapper[4828]: I1129 07:27:22.009882 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d31dd62-6c7f-4529-8da5-cfb615b653e2","Type":"ContainerStarted","Data":"b030bcee3cc018e37aa5db3b301cc247aefa79e45bce4350724ba13c1cf31790"} Nov 29 07:27:23 crc kubenswrapper[4828]: I1129 07:27:23.036506 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d31dd62-6c7f-4529-8da5-cfb615b653e2","Type":"ContainerStarted","Data":"c2cc660a44a511085e10051976c006860703bb63faf27ed5cdb193e07d2a45d2"} Nov 29 07:27:23 crc kubenswrapper[4828]: I1129 07:27:23.056092 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.056072432 podStartE2EDuration="3.056072432s" podCreationTimestamp="2025-11-29 07:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:27:23.050986802 +0000 UTC m=+1582.673062870" watchObservedRunningTime="2025-11-29 07:27:23.056072432 +0000 UTC m=+1582.678148490" Nov 29 07:27:23 crc kubenswrapper[4828]: I1129 07:27:23.072753 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.07273027 podStartE2EDuration="3.07273027s" podCreationTimestamp="2025-11-29 07:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:27:23.068473841 +0000 UTC m=+1582.690549909" watchObservedRunningTime="2025-11-29 07:27:23.07273027 +0000 UTC m=+1582.694806328" Nov 29 07:27:23 crc kubenswrapper[4828]: I1129 07:27:23.093817 4828 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.093793381 podStartE2EDuration="4.093793381s" podCreationTimestamp="2025-11-29 07:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:27:23.084608965 +0000 UTC m=+1582.706685023" watchObservedRunningTime="2025-11-29 07:27:23.093793381 +0000 UTC m=+1582.715869439" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.186313 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.186432 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.186974 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.186999 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.189665 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.190098 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.409630 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-f5tnw"] Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.412610 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.426675 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-f5tnw"] Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.487408 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.488027 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.488321 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.488786 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-config\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.488877 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.488980 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn66j\" (UniqueName: \"kubernetes.io/projected/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-kube-api-access-hn66j\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.591535 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.591673 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-config\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.591759 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.591884 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hn66j\" (UniqueName: \"kubernetes.io/projected/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-kube-api-access-hn66j\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.592067 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.592186 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.592906 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.593011 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-config\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.593122 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.593146 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.593367 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.615967 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn66j\" (UniqueName: \"kubernetes.io/projected/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-kube-api-access-hn66j\") pod \"dnsmasq-dns-79b5d74c8c-f5tnw\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.788415 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:24 crc kubenswrapper[4828]: I1129 07:27:24.989768 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 07:27:25 crc kubenswrapper[4828]: I1129 07:27:25.330393 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-f5tnw"] Nov 29 07:27:25 crc kubenswrapper[4828]: I1129 07:27:25.425515 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:25 crc kubenswrapper[4828]: I1129 07:27:25.453544 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:27:25 crc kubenswrapper[4828]: I1129 07:27:25.453596 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:27:25 crc kubenswrapper[4828]: I1129 07:27:25.489245 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 07:27:26 crc kubenswrapper[4828]: I1129 07:27:26.080568 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" event={"ID":"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d","Type":"ContainerStarted","Data":"7ef6092a36cfb158b024fdbfb1ef72ff850b08ba09bfa7b4ba730edee65b1349"} Nov 29 07:27:26 crc kubenswrapper[4828]: I1129 07:27:26.080876 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" event={"ID":"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d","Type":"ContainerStarted","Data":"b06ea8e26e89bb42c545a86602de2b165acee8df770a9e2ca79d19310cee55a0"} Nov 29 07:27:26 crc kubenswrapper[4828]: I1129 07:27:26.876261 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:26 crc kubenswrapper[4828]: I1129 07:27:26.876854 4828 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="ceilometer-central-agent" containerID="cri-o://d494a690fed971462e896f860f1a526a7046f6f1a6f1e52e305bfd38abff62cd" gracePeriod=30 Nov 29 07:27:26 crc kubenswrapper[4828]: I1129 07:27:26.876931 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="proxy-httpd" containerID="cri-o://753c6575f90b5cf6295543e26b09f67c25a1779c068d688bea1c49b89a4992bf" gracePeriod=30 Nov 29 07:27:26 crc kubenswrapper[4828]: I1129 07:27:26.876971 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="sg-core" containerID="cri-o://7ceb12baccb2ca9e97534dfe0ac78f43249ea03690692806897a9aa4a0f20231" gracePeriod=30 Nov 29 07:27:26 crc kubenswrapper[4828]: I1129 07:27:26.876983 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="ceilometer-notification-agent" containerID="cri-o://45356f335dd22e3e737caef4aea26236f93233b3f4a9acbda8c4f8bfd66ab0df" gracePeriod=30 Nov 29 07:27:27 crc kubenswrapper[4828]: I1129 07:27:27.058748 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:27:27 crc kubenswrapper[4828]: I1129 07:27:27.093423 4828 generic.go:334] "Generic (PLEG): container finished" podID="b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" containerID="7ef6092a36cfb158b024fdbfb1ef72ff850b08ba09bfa7b4ba730edee65b1349" exitCode=0 Nov 29 07:27:27 crc kubenswrapper[4828]: I1129 07:27:27.093544 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" event={"ID":"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d","Type":"ContainerDied","Data":"7ef6092a36cfb158b024fdbfb1ef72ff850b08ba09bfa7b4ba730edee65b1349"} Nov 29 07:27:27 crc 
kubenswrapper[4828]: I1129 07:27:27.099345 4828 generic.go:334] "Generic (PLEG): container finished" podID="04f457b1-ae28-4750-8777-9cd632aa4678" containerID="753c6575f90b5cf6295543e26b09f67c25a1779c068d688bea1c49b89a4992bf" exitCode=0 Nov 29 07:27:27 crc kubenswrapper[4828]: I1129 07:27:27.099394 4828 generic.go:334] "Generic (PLEG): container finished" podID="04f457b1-ae28-4750-8777-9cd632aa4678" containerID="7ceb12baccb2ca9e97534dfe0ac78f43249ea03690692806897a9aa4a0f20231" exitCode=2 Nov 29 07:27:27 crc kubenswrapper[4828]: I1129 07:27:27.099614 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7da5a0ad-86da-4601-af2a-9674af58b6e0" containerName="nova-api-log" containerID="cri-o://b6e2d20b15645413b0d5899c77adc882c6d951d04cfebbdb5dcb6e811cf402ea" gracePeriod=30 Nov 29 07:27:27 crc kubenswrapper[4828]: I1129 07:27:27.099725 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f457b1-ae28-4750-8777-9cd632aa4678","Type":"ContainerDied","Data":"753c6575f90b5cf6295543e26b09f67c25a1779c068d688bea1c49b89a4992bf"} Nov 29 07:27:27 crc kubenswrapper[4828]: I1129 07:27:27.099765 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f457b1-ae28-4750-8777-9cd632aa4678","Type":"ContainerDied","Data":"7ceb12baccb2ca9e97534dfe0ac78f43249ea03690692806897a9aa4a0f20231"} Nov 29 07:27:27 crc kubenswrapper[4828]: I1129 07:27:27.099794 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7da5a0ad-86da-4601-af2a-9674af58b6e0" containerName="nova-api-api" containerID="cri-o://c41edc40e5d1a8652545fc5d12c154b5c69fe22e7378f6ec7df8c93e65b451cb" gracePeriod=30 Nov 29 07:27:28 crc kubenswrapper[4828]: I1129 07:27:28.110666 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" 
event={"ID":"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d","Type":"ContainerStarted","Data":"8152aade310bb4fa7370968a50ece560b1a501fa4cf7c56ea64b84d08f19af8e"} Nov 29 07:27:28 crc kubenswrapper[4828]: I1129 07:27:28.111882 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:27:28 crc kubenswrapper[4828]: I1129 07:27:28.113657 4828 generic.go:334] "Generic (PLEG): container finished" podID="04f457b1-ae28-4750-8777-9cd632aa4678" containerID="d494a690fed971462e896f860f1a526a7046f6f1a6f1e52e305bfd38abff62cd" exitCode=0 Nov 29 07:27:28 crc kubenswrapper[4828]: I1129 07:27:28.113729 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f457b1-ae28-4750-8777-9cd632aa4678","Type":"ContainerDied","Data":"d494a690fed971462e896f860f1a526a7046f6f1a6f1e52e305bfd38abff62cd"} Nov 29 07:27:28 crc kubenswrapper[4828]: I1129 07:27:28.115942 4828 generic.go:334] "Generic (PLEG): container finished" podID="7da5a0ad-86da-4601-af2a-9674af58b6e0" containerID="b6e2d20b15645413b0d5899c77adc882c6d951d04cfebbdb5dcb6e811cf402ea" exitCode=143 Nov 29 07:27:28 crc kubenswrapper[4828]: I1129 07:27:28.116090 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7da5a0ad-86da-4601-af2a-9674af58b6e0","Type":"ContainerDied","Data":"b6e2d20b15645413b0d5899c77adc882c6d951d04cfebbdb5dcb6e811cf402ea"} Nov 29 07:27:28 crc kubenswrapper[4828]: I1129 07:27:28.156759 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" podStartSLOduration=4.156731211 podStartE2EDuration="4.156731211s" podCreationTimestamp="2025-11-29 07:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:27:28.139416356 +0000 UTC m=+1587.761492414" watchObservedRunningTime="2025-11-29 07:27:28.156731211 +0000 UTC 
m=+1587.778807269" Nov 29 07:27:30 crc kubenswrapper[4828]: I1129 07:27:30.417789 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:30 crc kubenswrapper[4828]: I1129 07:27:30.434111 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:30 crc kubenswrapper[4828]: I1129 07:27:30.453510 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 07:27:30 crc kubenswrapper[4828]: I1129 07:27:30.453615 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 07:27:30 crc kubenswrapper[4828]: I1129 07:27:30.492381 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 29 07:27:30 crc kubenswrapper[4828]: I1129 07:27:30.530051 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.162560 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.172886 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.315182 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-q954t"] Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.316673 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.319732 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.325364 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.330293 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-q954t"] Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.425841 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gbk8\" (UniqueName: \"kubernetes.io/projected/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-kube-api-access-8gbk8\") pod \"nova-cell1-cell-mapping-q954t\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.426106 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-config-data\") pod \"nova-cell1-cell-mapping-q954t\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.426154 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-scripts\") pod \"nova-cell1-cell-mapping-q954t\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.426206 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q954t\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.469506 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.469546 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.527688 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-config-data\") pod \"nova-cell1-cell-mapping-q954t\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.527751 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-scripts\") pod \"nova-cell1-cell-mapping-q954t\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.527780 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-combined-ca-bundle\") pod 
\"nova-cell1-cell-mapping-q954t\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.527829 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gbk8\" (UniqueName: \"kubernetes.io/projected/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-kube-api-access-8gbk8\") pod \"nova-cell1-cell-mapping-q954t\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.541411 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-config-data\") pod \"nova-cell1-cell-mapping-q954t\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.541813 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-scripts\") pod \"nova-cell1-cell-mapping-q954t\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.542217 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q954t\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.548334 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gbk8\" (UniqueName: \"kubernetes.io/projected/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-kube-api-access-8gbk8\") pod \"nova-cell1-cell-mapping-q954t\" (UID: 
\"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:27:31 crc kubenswrapper[4828]: I1129 07:27:31.636496 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:27:32 crc kubenswrapper[4828]: I1129 07:27:32.112308 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-q954t"] Nov 29 07:27:32 crc kubenswrapper[4828]: I1129 07:27:32.154917 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q954t" event={"ID":"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb","Type":"ContainerStarted","Data":"0f0cbf92b52783298d33432e6206d23f0f5d53188777830995aab87702aa5bda"} Nov 29 07:27:34 crc kubenswrapper[4828]: I1129 07:27:34.183101 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="7da5a0ad-86da-4601-af2a-9674af58b6e0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": dial tcp 10.217.0.203:8774: connect: connection refused" Nov 29 07:27:34 crc kubenswrapper[4828]: I1129 07:27:34.183240 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="7da5a0ad-86da-4601-af2a-9674af58b6e0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": dial tcp 10.217.0.203:8774: connect: connection refused" Nov 29 07:27:34 crc kubenswrapper[4828]: I1129 07:27:34.512004 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:27:34 crc kubenswrapper[4828]: I1129 07:27:34.512557 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="da136d32-fe97-49ae-b9eb-c94dda775a13" containerName="kube-state-metrics" containerID="cri-o://f4dcf140536ad3e36b817202f8bb975b0fd7e7879bc7cbdc96e57a8140a803f5" gracePeriod=30 Nov 29 07:27:34 crc kubenswrapper[4828]: 
I1129 07:27:34.790461 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw"
Nov 29 07:27:34 crc kubenswrapper[4828]: I1129 07:27:34.866599 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-kwkll"]
Nov 29 07:27:34 crc kubenswrapper[4828]: I1129 07:27:34.866907 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" podUID="79e77aa1-bd34-4449-9880-10c2160b044b" containerName="dnsmasq-dns" containerID="cri-o://902456cd170b0b1c264068107fd6b8a3fdac983b87c0191b130022eafcce2f67" gracePeriod=10
Nov 29 07:27:35 crc kubenswrapper[4828]: I1129 07:27:35.959504 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="da136d32-fe97-49ae-b9eb-c94dda775a13" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": dial tcp 10.217.0.104:8081: connect: connection refused"
Nov 29 07:27:37 crc kubenswrapper[4828]: I1129 07:27:37.912613 4828 generic.go:334] "Generic (PLEG): container finished" podID="7da5a0ad-86da-4601-af2a-9674af58b6e0" containerID="c41edc40e5d1a8652545fc5d12c154b5c69fe22e7378f6ec7df8c93e65b451cb" exitCode=-1
Nov 29 07:27:37 crc kubenswrapper[4828]: I1129 07:27:37.914713 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7da5a0ad-86da-4601-af2a-9674af58b6e0","Type":"ContainerDied","Data":"c41edc40e5d1a8652545fc5d12c154b5c69fe22e7378f6ec7df8c93e65b451cb"}
Nov 29 07:27:38 crc kubenswrapper[4828]: I1129 07:27:38.045752 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" podUID="79e77aa1-bd34-4449-9880-10c2160b044b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.198:5353: connect: connection refused"
Nov 29 07:27:41 crc kubenswrapper[4828]: I1129 07:27:41.023901 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 29 07:27:41 crc kubenswrapper[4828]: I1129 07:27:41.026113 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 29 07:27:41 crc kubenswrapper[4828]: I1129 07:27:41.030774 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 29 07:27:41 crc kubenswrapper[4828]: I1129 07:27:41.487423 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 07:27:41 crc kubenswrapper[4828]: I1129 07:27:41.487518 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 07:27:41 crc kubenswrapper[4828]: I1129 07:27:41.487594 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj"
Nov 29 07:27:41 crc kubenswrapper[4828]: I1129 07:27:41.488782 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 29 07:27:41 crc kubenswrapper[4828]: I1129 07:27:41.488858 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" gracePeriod=600
Nov 29 07:27:42 crc kubenswrapper[4828]: I1129 07:27:42.661212 4828 generic.go:334] "Generic (PLEG): container finished" podID="04f457b1-ae28-4750-8777-9cd632aa4678" containerID="45356f335dd22e3e737caef4aea26236f93233b3f4a9acbda8c4f8bfd66ab0df" exitCode=0
Nov 29 07:27:42 crc kubenswrapper[4828]: I1129 07:27:42.661360 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f457b1-ae28-4750-8777-9cd632aa4678","Type":"ContainerDied","Data":"45356f335dd22e3e737caef4aea26236f93233b3f4a9acbda8c4f8bfd66ab0df"}
Nov 29 07:27:42 crc kubenswrapper[4828]: I1129 07:27:42.688512 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.180:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 29 07:27:43 crc kubenswrapper[4828]: I1129 07:27:43.045437 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" podUID="79e77aa1-bd34-4449-9880-10c2160b044b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.198:5353: connect: connection refused"
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.790868 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.896612 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-combined-ca-bundle\") pod \"04f457b1-ae28-4750-8777-9cd632aa4678\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") "
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.896675 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-sg-core-conf-yaml\") pod \"04f457b1-ae28-4750-8777-9cd632aa4678\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") "
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.896773 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppw4d\" (UniqueName: \"kubernetes.io/projected/04f457b1-ae28-4750-8777-9cd632aa4678-kube-api-access-ppw4d\") pod \"04f457b1-ae28-4750-8777-9cd632aa4678\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") "
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.896866 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f457b1-ae28-4750-8777-9cd632aa4678-run-httpd\") pod \"04f457b1-ae28-4750-8777-9cd632aa4678\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") "
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.896941 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-config-data\") pod \"04f457b1-ae28-4750-8777-9cd632aa4678\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") "
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.896977 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-scripts\") pod \"04f457b1-ae28-4750-8777-9cd632aa4678\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") "
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.897019 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f457b1-ae28-4750-8777-9cd632aa4678-log-httpd\") pod \"04f457b1-ae28-4750-8777-9cd632aa4678\" (UID: \"04f457b1-ae28-4750-8777-9cd632aa4678\") "
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.897456 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04f457b1-ae28-4750-8777-9cd632aa4678-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "04f457b1-ae28-4750-8777-9cd632aa4678" (UID: "04f457b1-ae28-4750-8777-9cd632aa4678"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.897539 4828 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f457b1-ae28-4750-8777-9cd632aa4678-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.897752 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04f457b1-ae28-4750-8777-9cd632aa4678-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "04f457b1-ae28-4750-8777-9cd632aa4678" (UID: "04f457b1-ae28-4750-8777-9cd632aa4678"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.902736 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-scripts" (OuterVolumeSpecName: "scripts") pod "04f457b1-ae28-4750-8777-9cd632aa4678" (UID: "04f457b1-ae28-4750-8777-9cd632aa4678"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.902912 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04f457b1-ae28-4750-8777-9cd632aa4678-kube-api-access-ppw4d" (OuterVolumeSpecName: "kube-api-access-ppw4d") pod "04f457b1-ae28-4750-8777-9cd632aa4678" (UID: "04f457b1-ae28-4750-8777-9cd632aa4678"). InnerVolumeSpecName "kube-api-access-ppw4d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.959543 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "04f457b1-ae28-4750-8777-9cd632aa4678" (UID: "04f457b1-ae28-4750-8777-9cd632aa4678"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.999011 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-scripts\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:44 crc kubenswrapper[4828]: I1129 07:27:44.999912 4828 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f457b1-ae28-4750-8777-9cd632aa4678-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:44.999936 4828 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:44.999950 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppw4d\" (UniqueName: \"kubernetes.io/projected/04f457b1-ae28-4750-8777-9cd632aa4678-kube-api-access-ppw4d\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.001667 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04f457b1-ae28-4750-8777-9cd632aa4678" (UID: "04f457b1-ae28-4750-8777-9cd632aa4678"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.027916 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-config-data" (OuterVolumeSpecName: "config-data") pod "04f457b1-ae28-4750-8777-9cd632aa4678" (UID: "04f457b1-ae28-4750-8777-9cd632aa4678"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.101245 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.101305 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f457b1-ae28-4750-8777-9cd632aa4678-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.271047 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.410291 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.410660 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhl45\" (UniqueName: \"kubernetes.io/projected/7da5a0ad-86da-4601-af2a-9674af58b6e0-kube-api-access-qhl45\") pod \"7da5a0ad-86da-4601-af2a-9674af58b6e0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") "
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.410714 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7da5a0ad-86da-4601-af2a-9674af58b6e0-config-data\") pod \"7da5a0ad-86da-4601-af2a-9674af58b6e0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") "
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.410763 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7da5a0ad-86da-4601-af2a-9674af58b6e0-combined-ca-bundle\") pod \"7da5a0ad-86da-4601-af2a-9674af58b6e0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") "
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.410814 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7da5a0ad-86da-4601-af2a-9674af58b6e0-logs\") pod \"7da5a0ad-86da-4601-af2a-9674af58b6e0\" (UID: \"7da5a0ad-86da-4601-af2a-9674af58b6e0\") "
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.411766 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7da5a0ad-86da-4601-af2a-9674af58b6e0-logs" (OuterVolumeSpecName: "logs") pod "7da5a0ad-86da-4601-af2a-9674af58b6e0" (UID: "7da5a0ad-86da-4601-af2a-9674af58b6e0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.414962 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7da5a0ad-86da-4601-af2a-9674af58b6e0-kube-api-access-qhl45" (OuterVolumeSpecName: "kube-api-access-qhl45") pod "7da5a0ad-86da-4601-af2a-9674af58b6e0" (UID: "7da5a0ad-86da-4601-af2a-9674af58b6e0"). InnerVolumeSpecName "kube-api-access-qhl45". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.452077 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7da5a0ad-86da-4601-af2a-9674af58b6e0-config-data" (OuterVolumeSpecName: "config-data") pod "7da5a0ad-86da-4601-af2a-9674af58b6e0" (UID: "7da5a0ad-86da-4601-af2a-9674af58b6e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.468500 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.468586 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7da5a0ad-86da-4601-af2a-9674af58b6e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7da5a0ad-86da-4601-af2a-9674af58b6e0" (UID: "7da5a0ad-86da-4601-af2a-9674af58b6e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.512643 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8kzw\" (UniqueName: \"kubernetes.io/projected/da136d32-fe97-49ae-b9eb-c94dda775a13-kube-api-access-z8kzw\") pod \"da136d32-fe97-49ae-b9eb-c94dda775a13\" (UID: \"da136d32-fe97-49ae-b9eb-c94dda775a13\") "
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.513953 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhl45\" (UniqueName: \"kubernetes.io/projected/7da5a0ad-86da-4601-af2a-9674af58b6e0-kube-api-access-qhl45\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.513984 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7da5a0ad-86da-4601-af2a-9674af58b6e0-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.513996 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7da5a0ad-86da-4601-af2a-9674af58b6e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.514007 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7da5a0ad-86da-4601-af2a-9674af58b6e0-logs\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.516547 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da136d32-fe97-49ae-b9eb-c94dda775a13-kube-api-access-z8kzw" (OuterVolumeSpecName: "kube-api-access-z8kzw") pod "da136d32-fe97-49ae-b9eb-c94dda775a13" (UID: "da136d32-fe97-49ae-b9eb-c94dda775a13"). InnerVolumeSpecName "kube-api-access-z8kzw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:27:45 crc kubenswrapper[4828]: I1129 07:27:45.615673 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8kzw\" (UniqueName: \"kubernetes.io/projected/da136d32-fe97-49ae-b9eb-c94dda775a13-kube-api-access-z8kzw\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.407815 4828 generic.go:334] "Generic (PLEG): container finished" podID="79e77aa1-bd34-4449-9880-10c2160b044b" containerID="902456cd170b0b1c264068107fd6b8a3fdac983b87c0191b130022eafcce2f67" exitCode=0
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.408195 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" event={"ID":"79e77aa1-bd34-4449-9880-10c2160b044b","Type":"ContainerDied","Data":"902456cd170b0b1c264068107fd6b8a3fdac983b87c0191b130022eafcce2f67"}
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.411031 4828 generic.go:334] "Generic (PLEG): container finished" podID="da136d32-fe97-49ae-b9eb-c94dda775a13" containerID="f4dcf140536ad3e36b817202f8bb975b0fd7e7879bc7cbdc96e57a8140a803f5" exitCode=2
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.411063 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"da136d32-fe97-49ae-b9eb-c94dda775a13","Type":"ContainerDied","Data":"f4dcf140536ad3e36b817202f8bb975b0fd7e7879bc7cbdc96e57a8140a803f5"}
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.411087 4828 scope.go:117] "RemoveContainer" containerID="f4dcf140536ad3e36b817202f8bb975b0fd7e7879bc7cbdc96e57a8140a803f5"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.411095 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.456976 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.464012 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.472437 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 29 07:27:46 crc kubenswrapper[4828]: E1129 07:27:46.472869 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da136d32-fe97-49ae-b9eb-c94dda775a13" containerName="kube-state-metrics"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.472887 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="da136d32-fe97-49ae-b9eb-c94dda775a13" containerName="kube-state-metrics"
Nov 29 07:27:46 crc kubenswrapper[4828]: E1129 07:27:46.472903 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="ceilometer-central-agent"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.472911 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="ceilometer-central-agent"
Nov 29 07:27:46 crc kubenswrapper[4828]: E1129 07:27:46.472918 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="sg-core"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.472925 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="sg-core"
Nov 29 07:27:46 crc kubenswrapper[4828]: E1129 07:27:46.472943 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da5a0ad-86da-4601-af2a-9674af58b6e0" containerName="nova-api-log"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.472949 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da5a0ad-86da-4601-af2a-9674af58b6e0" containerName="nova-api-log"
Nov 29 07:27:46 crc kubenswrapper[4828]: E1129 07:27:46.472967 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="ceilometer-notification-agent"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.472974 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="ceilometer-notification-agent"
Nov 29 07:27:46 crc kubenswrapper[4828]: E1129 07:27:46.473026 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da5a0ad-86da-4601-af2a-9674af58b6e0" containerName="nova-api-api"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.473032 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da5a0ad-86da-4601-af2a-9674af58b6e0" containerName="nova-api-api"
Nov 29 07:27:46 crc kubenswrapper[4828]: E1129 07:27:46.473042 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="proxy-httpd"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.473049 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="proxy-httpd"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.473222 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="7da5a0ad-86da-4601-af2a-9674af58b6e0" containerName="nova-api-api"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.473239 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="da136d32-fe97-49ae-b9eb-c94dda775a13" containerName="kube-state-metrics"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.473248 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="7da5a0ad-86da-4601-af2a-9674af58b6e0" containerName="nova-api-log"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.473283 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="ceilometer-notification-agent"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.473293 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="sg-core"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.473304 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="proxy-httpd"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.473313 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" containerName="ceilometer-central-agent"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.473959 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.476701 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.476921 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.480928 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.531808 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02492ec5-a65e-4179-aff9-b5d25154f8d2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"02492ec5-a65e-4179-aff9-b5d25154f8d2\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.531856 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/02492ec5-a65e-4179-aff9-b5d25154f8d2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"02492ec5-a65e-4179-aff9-b5d25154f8d2\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.531886 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqmlx\" (UniqueName: \"kubernetes.io/projected/02492ec5-a65e-4179-aff9-b5d25154f8d2-kube-api-access-fqmlx\") pod \"kube-state-metrics-0\" (UID: \"02492ec5-a65e-4179-aff9-b5d25154f8d2\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.531990 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/02492ec5-a65e-4179-aff9-b5d25154f8d2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"02492ec5-a65e-4179-aff9-b5d25154f8d2\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.633311 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/02492ec5-a65e-4179-aff9-b5d25154f8d2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"02492ec5-a65e-4179-aff9-b5d25154f8d2\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.633439 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02492ec5-a65e-4179-aff9-b5d25154f8d2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"02492ec5-a65e-4179-aff9-b5d25154f8d2\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.633457 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/02492ec5-a65e-4179-aff9-b5d25154f8d2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"02492ec5-a65e-4179-aff9-b5d25154f8d2\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.633482 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqmlx\" (UniqueName: \"kubernetes.io/projected/02492ec5-a65e-4179-aff9-b5d25154f8d2-kube-api-access-fqmlx\") pod \"kube-state-metrics-0\" (UID: \"02492ec5-a65e-4179-aff9-b5d25154f8d2\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.639370 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02492ec5-a65e-4179-aff9-b5d25154f8d2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"02492ec5-a65e-4179-aff9-b5d25154f8d2\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.639925 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/02492ec5-a65e-4179-aff9-b5d25154f8d2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"02492ec5-a65e-4179-aff9-b5d25154f8d2\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.640450 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/02492ec5-a65e-4179-aff9-b5d25154f8d2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"02492ec5-a65e-4179-aff9-b5d25154f8d2\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.650289 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqmlx\" (UniqueName: \"kubernetes.io/projected/02492ec5-a65e-4179-aff9-b5d25154f8d2-kube-api-access-fqmlx\") pod \"kube-state-metrics-0\" (UID: \"02492ec5-a65e-4179-aff9-b5d25154f8d2\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: E1129 07:27:46.699235 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.708856 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.790218 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.836934 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-ovsdbserver-nb\") pod \"79e77aa1-bd34-4449-9880-10c2160b044b\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") "
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.836986 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-dns-swift-storage-0\") pod \"79e77aa1-bd34-4449-9880-10c2160b044b\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") "
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.837052 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-dns-svc\") pod \"79e77aa1-bd34-4449-9880-10c2160b044b\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") "
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.837193 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x9b9\" (UniqueName: \"kubernetes.io/projected/79e77aa1-bd34-4449-9880-10c2160b044b-kube-api-access-8x9b9\") pod \"79e77aa1-bd34-4449-9880-10c2160b044b\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") "
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.837295 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-config\") pod \"79e77aa1-bd34-4449-9880-10c2160b044b\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") "
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.837339 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-ovsdbserver-sb\") pod \"79e77aa1-bd34-4449-9880-10c2160b044b\" (UID: \"79e77aa1-bd34-4449-9880-10c2160b044b\") "
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.843328 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e77aa1-bd34-4449-9880-10c2160b044b-kube-api-access-8x9b9" (OuterVolumeSpecName: "kube-api-access-8x9b9") pod "79e77aa1-bd34-4449-9880-10c2160b044b" (UID: "79e77aa1-bd34-4449-9880-10c2160b044b"). InnerVolumeSpecName "kube-api-access-8x9b9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.898483 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "79e77aa1-bd34-4449-9880-10c2160b044b" (UID: "79e77aa1-bd34-4449-9880-10c2160b044b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.899073 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-config" (OuterVolumeSpecName: "config") pod "79e77aa1-bd34-4449-9880-10c2160b044b" (UID: "79e77aa1-bd34-4449-9880-10c2160b044b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.915537 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79e77aa1-bd34-4449-9880-10c2160b044b" (UID: "79e77aa1-bd34-4449-9880-10c2160b044b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.941634 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x9b9\" (UniqueName: \"kubernetes.io/projected/79e77aa1-bd34-4449-9880-10c2160b044b-kube-api-access-8x9b9\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.941996 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.942012 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.942025 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.942897 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "79e77aa1-bd34-4449-9880-10c2160b044b" (UID: "79e77aa1-bd34-4449-9880-10c2160b044b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:27:46 crc kubenswrapper[4828]: I1129 07:27:46.957662 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "79e77aa1-bd34-4449-9880-10c2160b044b" (UID: "79e77aa1-bd34-4449-9880-10c2160b044b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.043475 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.043504 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79e77aa1-bd34-4449-9880-10c2160b044b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.280835 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 29 07:27:47 crc kubenswrapper[4828]: W1129 07:27:47.282995 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02492ec5_a65e_4179_aff9_b5d25154f8d2.slice/crio-098bd42f6d06a09df3400f8a8dbfd59e1127fc26074216ef65ac947813070150 WatchSource:0}: Error finding container 098bd42f6d06a09df3400f8a8dbfd59e1127fc26074216ef65ac947813070150: Status 404 returned error can't find the container with id 098bd42f6d06a09df3400f8a8dbfd59e1127fc26074216ef65ac947813070150
Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.430956 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da136d32-fe97-49ae-b9eb-c94dda775a13" path="/var/lib/kubelet/pods/da136d32-fe97-49ae-b9eb-c94dda775a13/volumes"
Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.441173 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f457b1-ae28-4750-8777-9cd632aa4678","Type":"ContainerDied","Data":"ebddff6fea5825605db696f8c19642851a4c4386eee7f7d6b717140569314ada"}
Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.441603 4828 scope.go:117] "RemoveContainer" containerID="753c6575f90b5cf6295543e26b09f67c25a1779c068d688bea1c49b89a4992bf"
Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.441739 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.446340 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7da5a0ad-86da-4601-af2a-9674af58b6e0","Type":"ContainerDied","Data":"59ae5387516b7a1b960394d127740ea60530d0c395687d715087d617230bc5b3"}
Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.446547 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.451715 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" event={"ID":"79e77aa1-bd34-4449-9880-10c2160b044b","Type":"ContainerDied","Data":"b2d1c3495dedbb256a51b853cabe50a4ca64bcd930fe81ad0123c5e6fc806f3a"}
Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.451774 4828 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-kwkll" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.455136 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q954t" event={"ID":"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb","Type":"ContainerStarted","Data":"380c995285f836694980cd286ea0bb721e95681a155d99d25178a0de1d731651"} Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.476483 4828 generic.go:334] "Generic (PLEG): container finished" podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" exitCode=0 Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.476590 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd"} Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.477352 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:27:47 crc kubenswrapper[4828]: E1129 07:27:47.477633 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.483795 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"02492ec5-a65e-4179-aff9-b5d25154f8d2","Type":"ContainerStarted","Data":"098bd42f6d06a09df3400f8a8dbfd59e1127fc26074216ef65ac947813070150"} Nov 29 07:27:47 crc kubenswrapper[4828]: 
I1129 07:27:47.504907 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.505536 4828 scope.go:117] "RemoveContainer" containerID="7ceb12baccb2ca9e97534dfe0ac78f43249ea03690692806897a9aa4a0f20231" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.539221 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.553504 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:47 crc kubenswrapper[4828]: E1129 07:27:47.553963 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e77aa1-bd34-4449-9880-10c2160b044b" containerName="dnsmasq-dns" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.553982 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e77aa1-bd34-4449-9880-10c2160b044b" containerName="dnsmasq-dns" Nov 29 07:27:47 crc kubenswrapper[4828]: E1129 07:27:47.554008 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e77aa1-bd34-4449-9880-10c2160b044b" containerName="init" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.554015 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e77aa1-bd34-4449-9880-10c2160b044b" containerName="init" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.554189 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e77aa1-bd34-4449-9880-10c2160b044b" containerName="dnsmasq-dns" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.555966 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.560419 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.560648 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.560786 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.567648 4828 scope.go:117] "RemoveContainer" containerID="45356f335dd22e3e737caef4aea26236f93233b3f4a9acbda8c4f8bfd66ab0df" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.579084 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.590473 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.610117 4828 scope.go:117] "RemoveContainer" containerID="d494a690fed971462e896f860f1a526a7046f6f1a6f1e52e305bfd38abff62cd" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.648797 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.660534 4828 scope.go:117] "RemoveContainer" containerID="c41edc40e5d1a8652545fc5d12c154b5c69fe22e7378f6ec7df8c93e65b451cb" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.666342 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-run-httpd\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.666810 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.666849 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.667071 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-config-data\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.667140 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.667185 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-log-httpd\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.667307 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k7b9\" (UniqueName: 
\"kubernetes.io/projected/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-kube-api-access-6k7b9\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.667346 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-scripts\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.671035 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.705182 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.710941 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.711394 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.711462 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.722903 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-kwkll"] Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.737250 4828 scope.go:117] "RemoveContainer" containerID="b6e2d20b15645413b0d5899c77adc882c6d951d04cfebbdb5dcb6e811cf402ea" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.740674 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-kwkll"] Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.757812 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-0"] Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.762721 4828 scope.go:117] "RemoveContainer" containerID="902456cd170b0b1c264068107fd6b8a3fdac983b87c0191b130022eafcce2f67" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.769472 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-run-httpd\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.769573 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.769596 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.769615 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-config-data\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.769646 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 
07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.769689 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-log-httpd\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.769717 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k7b9\" (UniqueName: \"kubernetes.io/projected/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-kube-api-access-6k7b9\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.769743 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-scripts\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.769853 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-run-httpd\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.771108 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-q954t" podStartSLOduration=16.771086853 podStartE2EDuration="16.771086853s" podCreationTimestamp="2025-11-29 07:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:27:47.54560977 +0000 UTC m=+1607.167685828" watchObservedRunningTime="2025-11-29 07:27:47.771086853 +0000 UTC m=+1607.393162911" Nov 29 07:27:47 crc 
kubenswrapper[4828]: I1129 07:27:47.773120 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-log-httpd\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.777599 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.778047 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.783165 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-scripts\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.784074 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-config-data\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.787601 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.788722 4828 scope.go:117] "RemoveContainer" containerID="d3544e11c20f3606c8b099b1f8c9b00efeb66a5d69637e5bf8a6684b0bb5c41c" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.790427 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k7b9\" (UniqueName: \"kubernetes.io/projected/f8d3ec51-1a59-47fd-96f9-d97022ca7fcd-kube-api-access-6k7b9\") pod \"ceilometer-0\" (UID: \"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd\") " pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.808899 4828 scope.go:117] "RemoveContainer" containerID="f1153e52620f218b272037744559959e572334f0c0db38036c7622fd8f01d457" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.871913 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-config-data\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.871965 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b4737d-0022-41b0-af94-bb25b892b9e0-logs\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.872010 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.872067 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-public-tls-certs\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.872209 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnc76\" (UniqueName: \"kubernetes.io/projected/76b4737d-0022-41b0-af94-bb25b892b9e0-kube-api-access-wnc76\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.872285 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.906003 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.973807 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.973890 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-public-tls-certs\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.973990 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnc76\" (UniqueName: \"kubernetes.io/projected/76b4737d-0022-41b0-af94-bb25b892b9e0-kube-api-access-wnc76\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.974039 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.974111 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-config-data\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.974137 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/76b4737d-0022-41b0-af94-bb25b892b9e0-logs\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.974554 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b4737d-0022-41b0-af94-bb25b892b9e0-logs\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.978282 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.980037 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-config-data\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.980192 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-public-tls-certs\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.982652 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:47 crc kubenswrapper[4828]: I1129 07:27:47.994636 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wnc76\" (UniqueName: \"kubernetes.io/projected/76b4737d-0022-41b0-af94-bb25b892b9e0-kube-api-access-wnc76\") pod \"nova-api-0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " pod="openstack/nova-api-0" Nov 29 07:27:48 crc kubenswrapper[4828]: I1129 07:27:48.035863 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:27:48 crc kubenswrapper[4828]: I1129 07:27:48.418076 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:27:48 crc kubenswrapper[4828]: I1129 07:27:48.445142 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:48 crc kubenswrapper[4828]: I1129 07:27:48.495310 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd","Type":"ContainerStarted","Data":"c316e5ab4e9b971d9d15a58002c23e5f0598f2397f17c91fe7159ed785303af3"} Nov 29 07:27:48 crc kubenswrapper[4828]: I1129 07:27:48.496613 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76b4737d-0022-41b0-af94-bb25b892b9e0","Type":"ContainerStarted","Data":"5edf426f8423cadcdc2a196bda1c3e34661788670f4b635d4c9c4eceaaf73fcc"} Nov 29 07:27:49 crc kubenswrapper[4828]: I1129 07:27:49.423260 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04f457b1-ae28-4750-8777-9cd632aa4678" path="/var/lib/kubelet/pods/04f457b1-ae28-4750-8777-9cd632aa4678/volumes" Nov 29 07:27:49 crc kubenswrapper[4828]: I1129 07:27:49.424700 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e77aa1-bd34-4449-9880-10c2160b044b" path="/var/lib/kubelet/pods/79e77aa1-bd34-4449-9880-10c2160b044b/volumes" Nov 29 07:27:49 crc kubenswrapper[4828]: I1129 07:27:49.425648 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7da5a0ad-86da-4601-af2a-9674af58b6e0" 
path="/var/lib/kubelet/pods/7da5a0ad-86da-4601-af2a-9674af58b6e0/volumes" Nov 29 07:27:50 crc kubenswrapper[4828]: I1129 07:27:50.524874 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76b4737d-0022-41b0-af94-bb25b892b9e0","Type":"ContainerStarted","Data":"d4810129515d07f3b34a9d033f41496a32d197136bc4ab4ea9583d494120bb72"} Nov 29 07:27:54 crc kubenswrapper[4828]: I1129 07:27:54.690447 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.180:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:27:58 crc kubenswrapper[4828]: I1129 07:27:58.412785 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:27:58 crc kubenswrapper[4828]: E1129 07:27:58.414333 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:27:58 crc kubenswrapper[4828]: I1129 07:27:58.597637 4828 generic.go:334] "Generic (PLEG): container finished" podID="23daa968-b9e7-4bfe-88eb-4aebf6ac37cb" containerID="380c995285f836694980cd286ea0bb721e95681a155d99d25178a0de1d731651" exitCode=0 Nov 29 07:27:58 crc kubenswrapper[4828]: I1129 07:27:58.597694 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q954t" event={"ID":"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb","Type":"ContainerDied","Data":"380c995285f836694980cd286ea0bb721e95681a155d99d25178a0de1d731651"} Nov 29 
07:27:59 crc kubenswrapper[4828]: I1129 07:27:59.968023 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.130135 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-combined-ca-bundle\") pod \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.130392 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-scripts\") pod \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.130485 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-config-data\") pod \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.130507 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gbk8\" (UniqueName: \"kubernetes.io/projected/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-kube-api-access-8gbk8\") pod \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\" (UID: \"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb\") " Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.137457 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-scripts" (OuterVolumeSpecName: "scripts") pod "23daa968-b9e7-4bfe-88eb-4aebf6ac37cb" (UID: "23daa968-b9e7-4bfe-88eb-4aebf6ac37cb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.137517 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-kube-api-access-8gbk8" (OuterVolumeSpecName: "kube-api-access-8gbk8") pod "23daa968-b9e7-4bfe-88eb-4aebf6ac37cb" (UID: "23daa968-b9e7-4bfe-88eb-4aebf6ac37cb"). InnerVolumeSpecName "kube-api-access-8gbk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.160801 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-config-data" (OuterVolumeSpecName: "config-data") pod "23daa968-b9e7-4bfe-88eb-4aebf6ac37cb" (UID: "23daa968-b9e7-4bfe-88eb-4aebf6ac37cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.161399 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23daa968-b9e7-4bfe-88eb-4aebf6ac37cb" (UID: "23daa968-b9e7-4bfe-88eb-4aebf6ac37cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.226665 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8b9sw"] Nov 29 07:28:00 crc kubenswrapper[4828]: E1129 07:28:00.227194 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23daa968-b9e7-4bfe-88eb-4aebf6ac37cb" containerName="nova-manage" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.227212 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="23daa968-b9e7-4bfe-88eb-4aebf6ac37cb" containerName="nova-manage" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.227582 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="23daa968-b9e7-4bfe-88eb-4aebf6ac37cb" containerName="nova-manage" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.229520 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.232798 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.232831 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.232846 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gbk8\" (UniqueName: \"kubernetes.io/projected/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-kube-api-access-8gbk8\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.232858 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.244927 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8b9sw"] Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.334092 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea97bab7-f379-4317-8a11-6035878e1085-utilities\") pod \"community-operators-8b9sw\" (UID: \"ea97bab7-f379-4317-8a11-6035878e1085\") " pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.334171 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbsfm\" (UniqueName: \"kubernetes.io/projected/ea97bab7-f379-4317-8a11-6035878e1085-kube-api-access-pbsfm\") pod \"community-operators-8b9sw\" (UID: \"ea97bab7-f379-4317-8a11-6035878e1085\") " pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.334316 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea97bab7-f379-4317-8a11-6035878e1085-catalog-content\") pod \"community-operators-8b9sw\" (UID: \"ea97bab7-f379-4317-8a11-6035878e1085\") " pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.436218 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea97bab7-f379-4317-8a11-6035878e1085-utilities\") pod \"community-operators-8b9sw\" (UID: \"ea97bab7-f379-4317-8a11-6035878e1085\") " pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.437082 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbsfm\" (UniqueName: \"kubernetes.io/projected/ea97bab7-f379-4317-8a11-6035878e1085-kube-api-access-pbsfm\") pod \"community-operators-8b9sw\" (UID: \"ea97bab7-f379-4317-8a11-6035878e1085\") " pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.437096 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea97bab7-f379-4317-8a11-6035878e1085-utilities\") pod \"community-operators-8b9sw\" (UID: \"ea97bab7-f379-4317-8a11-6035878e1085\") " pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.437629 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea97bab7-f379-4317-8a11-6035878e1085-catalog-content\") pod \"community-operators-8b9sw\" (UID: \"ea97bab7-f379-4317-8a11-6035878e1085\") " pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.437978 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea97bab7-f379-4317-8a11-6035878e1085-catalog-content\") pod \"community-operators-8b9sw\" (UID: \"ea97bab7-f379-4317-8a11-6035878e1085\") " pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.465164 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbsfm\" (UniqueName: \"kubernetes.io/projected/ea97bab7-f379-4317-8a11-6035878e1085-kube-api-access-pbsfm\") pod \"community-operators-8b9sw\" (UID: \"ea97bab7-f379-4317-8a11-6035878e1085\") " pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.630869 4828 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q954t" event={"ID":"23daa968-b9e7-4bfe-88eb-4aebf6ac37cb","Type":"ContainerDied","Data":"0f0cbf92b52783298d33432e6206d23f0f5d53188777830995aab87702aa5bda"} Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.630926 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f0cbf92b52783298d33432e6206d23f0f5d53188777830995aab87702aa5bda" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.630944 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q954t" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.646754 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.866935 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.889671 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.889980 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6977eeb3-82a7-42c7-9bae-29b46a93a75e" containerName="nova-scheduler-scheduler" containerID="cri-o://8a2cc258bfc021243f2d3bb4f2682c836f6184b28366a147252f36abd600933a" gracePeriod=30 Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.901033 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:00 crc kubenswrapper[4828]: I1129 07:28:00.901687 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" containerName="nova-metadata-log" containerID="cri-o://cdccc5e04b8d60115a27a2d2b6f11dde463c4917d0a99e668f4a909222886cb0" gracePeriod=30 Nov 29 07:28:00 
crc kubenswrapper[4828]: I1129 07:28:00.902103 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" containerName="nova-metadata-metadata" containerID="cri-o://c2cc660a44a511085e10051976c006860703bb63faf27ed5cdb193e07d2a45d2" gracePeriod=30 Nov 29 07:28:01 crc kubenswrapper[4828]: I1129 07:28:01.282475 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8b9sw"] Nov 29 07:28:01 crc kubenswrapper[4828]: W1129 07:28:01.285484 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea97bab7_f379_4317_8a11_6035878e1085.slice/crio-c1b9941aee660ab23b5f73ebb433e841d2942736d206c073b2b9a68cb9e63bf9 WatchSource:0}: Error finding container c1b9941aee660ab23b5f73ebb433e841d2942736d206c073b2b9a68cb9e63bf9: Status 404 returned error can't find the container with id c1b9941aee660ab23b5f73ebb433e841d2942736d206c073b2b9a68cb9e63bf9 Nov 29 07:28:01 crc kubenswrapper[4828]: I1129 07:28:01.671456 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76b4737d-0022-41b0-af94-bb25b892b9e0","Type":"ContainerStarted","Data":"9bcaf9c09291be6a2e80af2aaa94f5796de5bb521bd640a8598e710478290f71"} Nov 29 07:28:01 crc kubenswrapper[4828]: I1129 07:28:01.675586 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8b9sw" event={"ID":"ea97bab7-f379-4317-8a11-6035878e1085","Type":"ContainerStarted","Data":"c1b9941aee660ab23b5f73ebb433e841d2942736d206c073b2b9a68cb9e63bf9"} Nov 29 07:28:01 crc kubenswrapper[4828]: I1129 07:28:01.678909 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"02492ec5-a65e-4179-aff9-b5d25154f8d2","Type":"ContainerStarted","Data":"1959db66512cea3aaac5f2bdb8f716ab63392345deb7a34a0411906ca7ce2544"} Nov 29 07:28:01 
crc kubenswrapper[4828]: I1129 07:28:01.680663 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd","Type":"ContainerStarted","Data":"229bf11b79b7530e55716da5542285b2863e9c0aaa0185a5d091e9363d003584"} Nov 29 07:28:01 crc kubenswrapper[4828]: I1129 07:28:01.683397 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" containerID="cdccc5e04b8d60115a27a2d2b6f11dde463c4917d0a99e668f4a909222886cb0" exitCode=143 Nov 29 07:28:01 crc kubenswrapper[4828]: I1129 07:28:01.683432 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d31dd62-6c7f-4529-8da5-cfb615b653e2","Type":"ContainerDied","Data":"cdccc5e04b8d60115a27a2d2b6f11dde463c4917d0a99e668f4a909222886cb0"} Nov 29 07:28:02 crc kubenswrapper[4828]: I1129 07:28:02.698185 4828 generic.go:334] "Generic (PLEG): container finished" podID="6977eeb3-82a7-42c7-9bae-29b46a93a75e" containerID="8a2cc258bfc021243f2d3bb4f2682c836f6184b28366a147252f36abd600933a" exitCode=0 Nov 29 07:28:02 crc kubenswrapper[4828]: I1129 07:28:02.698307 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6977eeb3-82a7-42c7-9bae-29b46a93a75e","Type":"ContainerDied","Data":"8a2cc258bfc021243f2d3bb4f2682c836f6184b28366a147252f36abd600933a"} Nov 29 07:28:02 crc kubenswrapper[4828]: I1129 07:28:02.700889 4828 generic.go:334] "Generic (PLEG): container finished" podID="ea97bab7-f379-4317-8a11-6035878e1085" containerID="2854e1989edbc68bdf993a7d80b9433fecd94a524711b37c4567c7d6fb7f4ddc" exitCode=0 Nov 29 07:28:02 crc kubenswrapper[4828]: I1129 07:28:02.701037 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="76b4737d-0022-41b0-af94-bb25b892b9e0" containerName="nova-api-log" containerID="cri-o://d4810129515d07f3b34a9d033f41496a32d197136bc4ab4ea9583d494120bb72" gracePeriod=30 Nov 
29 07:28:02 crc kubenswrapper[4828]: I1129 07:28:02.702337 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8b9sw" event={"ID":"ea97bab7-f379-4317-8a11-6035878e1085","Type":"ContainerDied","Data":"2854e1989edbc68bdf993a7d80b9433fecd94a524711b37c4567c7d6fb7f4ddc"} Nov 29 07:28:02 crc kubenswrapper[4828]: I1129 07:28:02.703063 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 29 07:28:02 crc kubenswrapper[4828]: I1129 07:28:02.703597 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="76b4737d-0022-41b0-af94-bb25b892b9e0" containerName="nova-api-api" containerID="cri-o://9bcaf9c09291be6a2e80af2aaa94f5796de5bb521bd640a8598e710478290f71" gracePeriod=30 Nov 29 07:28:02 crc kubenswrapper[4828]: I1129 07:28:02.754301 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=15.754252419 podStartE2EDuration="15.754252419s" podCreationTimestamp="2025-11-29 07:27:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:28:02.752147265 +0000 UTC m=+1622.374223323" watchObservedRunningTime="2025-11-29 07:28:02.754252419 +0000 UTC m=+1622.376328477" Nov 29 07:28:02 crc kubenswrapper[4828]: I1129 07:28:02.807571 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.821752599 podStartE2EDuration="16.807542418s" podCreationTimestamp="2025-11-29 07:27:46 +0000 UTC" firstStartedPulling="2025-11-29 07:27:47.285338233 +0000 UTC m=+1606.907414301" lastFinishedPulling="2025-11-29 07:28:00.271128062 +0000 UTC m=+1619.893204120" observedRunningTime="2025-11-29 07:28:02.788754796 +0000 UTC m=+1622.410830854" watchObservedRunningTime="2025-11-29 07:28:02.807542418 +0000 UTC 
m=+1622.429618476" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.091359 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.201166 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977eeb3-82a7-42c7-9bae-29b46a93a75e-config-data\") pod \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\" (UID: \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\") " Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.201331 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6tpf\" (UniqueName: \"kubernetes.io/projected/6977eeb3-82a7-42c7-9bae-29b46a93a75e-kube-api-access-r6tpf\") pod \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\" (UID: \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\") " Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.201407 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977eeb3-82a7-42c7-9bae-29b46a93a75e-combined-ca-bundle\") pod \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\" (UID: \"6977eeb3-82a7-42c7-9bae-29b46a93a75e\") " Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.209450 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6977eeb3-82a7-42c7-9bae-29b46a93a75e-kube-api-access-r6tpf" (OuterVolumeSpecName: "kube-api-access-r6tpf") pod "6977eeb3-82a7-42c7-9bae-29b46a93a75e" (UID: "6977eeb3-82a7-42c7-9bae-29b46a93a75e"). InnerVolumeSpecName "kube-api-access-r6tpf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.234069 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6977eeb3-82a7-42c7-9bae-29b46a93a75e-config-data" (OuterVolumeSpecName: "config-data") pod "6977eeb3-82a7-42c7-9bae-29b46a93a75e" (UID: "6977eeb3-82a7-42c7-9bae-29b46a93a75e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.239810 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6977eeb3-82a7-42c7-9bae-29b46a93a75e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6977eeb3-82a7-42c7-9bae-29b46a93a75e" (UID: "6977eeb3-82a7-42c7-9bae-29b46a93a75e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.304069 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977eeb3-82a7-42c7-9bae-29b46a93a75e-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.304112 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6tpf\" (UniqueName: \"kubernetes.io/projected/6977eeb3-82a7-42c7-9bae-29b46a93a75e-kube-api-access-r6tpf\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.304124 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977eeb3-82a7-42c7-9bae-29b46a93a75e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.710982 4828 generic.go:334] "Generic (PLEG): container finished" podID="76b4737d-0022-41b0-af94-bb25b892b9e0" containerID="9bcaf9c09291be6a2e80af2aaa94f5796de5bb521bd640a8598e710478290f71" 
exitCode=0 Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.711235 4828 generic.go:334] "Generic (PLEG): container finished" podID="76b4737d-0022-41b0-af94-bb25b892b9e0" containerID="d4810129515d07f3b34a9d033f41496a32d197136bc4ab4ea9583d494120bb72" exitCode=143 Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.711299 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76b4737d-0022-41b0-af94-bb25b892b9e0","Type":"ContainerDied","Data":"9bcaf9c09291be6a2e80af2aaa94f5796de5bb521bd640a8598e710478290f71"} Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.711332 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76b4737d-0022-41b0-af94-bb25b892b9e0","Type":"ContainerDied","Data":"d4810129515d07f3b34a9d033f41496a32d197136bc4ab4ea9583d494120bb72"} Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.713599 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.713629 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6977eeb3-82a7-42c7-9bae-29b46a93a75e","Type":"ContainerDied","Data":"a0d0de89ae683461dd3298626885d0b065ce570f041206ff00fac913ed0b1326"} Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.713651 4828 scope.go:117] "RemoveContainer" containerID="8a2cc258bfc021243f2d3bb4f2682c836f6184b28366a147252f36abd600933a" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.742965 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.761922 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.773067 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:03 crc 
kubenswrapper[4828]: E1129 07:28:03.773694 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6977eeb3-82a7-42c7-9bae-29b46a93a75e" containerName="nova-scheduler-scheduler" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.773718 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6977eeb3-82a7-42c7-9bae-29b46a93a75e" containerName="nova-scheduler-scheduler" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.773973 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="6977eeb3-82a7-42c7-9bae-29b46a93a75e" containerName="nova-scheduler-scheduler" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.774877 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.777195 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.783731 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.915345 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05-config-data\") pod \"nova-scheduler-0\" (UID: \"7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.915446 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:03 crc kubenswrapper[4828]: I1129 07:28:03.915490 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8r2n\" (UniqueName: \"kubernetes.io/projected/7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05-kube-api-access-f8r2n\") pod \"nova-scheduler-0\" (UID: \"7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.018284 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.018380 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8r2n\" (UniqueName: \"kubernetes.io/projected/7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05-kube-api-access-f8r2n\") pod \"nova-scheduler-0\" (UID: \"7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.018525 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05-config-data\") pod \"nova-scheduler-0\" (UID: \"7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.035259 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05-config-data\") pod \"nova-scheduler-0\" (UID: \"7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.035338 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.037930 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8r2n\" (UniqueName: \"kubernetes.io/projected/7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05-kube-api-access-f8r2n\") pod \"nova-scheduler-0\" (UID: \"7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.098700 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.118367 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.222414 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b4737d-0022-41b0-af94-bb25b892b9e0-logs\") pod \"76b4737d-0022-41b0-af94-bb25b892b9e0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.222603 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-public-tls-certs\") pod \"76b4737d-0022-41b0-af94-bb25b892b9e0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.222637 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-internal-tls-certs\") pod \"76b4737d-0022-41b0-af94-bb25b892b9e0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 
07:28:04.222744 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnc76\" (UniqueName: \"kubernetes.io/projected/76b4737d-0022-41b0-af94-bb25b892b9e0-kube-api-access-wnc76\") pod \"76b4737d-0022-41b0-af94-bb25b892b9e0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.222796 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-combined-ca-bundle\") pod \"76b4737d-0022-41b0-af94-bb25b892b9e0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.222816 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-config-data\") pod \"76b4737d-0022-41b0-af94-bb25b892b9e0\" (UID: \"76b4737d-0022-41b0-af94-bb25b892b9e0\") " Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.222826 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76b4737d-0022-41b0-af94-bb25b892b9e0-logs" (OuterVolumeSpecName: "logs") pod "76b4737d-0022-41b0-af94-bb25b892b9e0" (UID: "76b4737d-0022-41b0-af94-bb25b892b9e0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.223242 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b4737d-0022-41b0-af94-bb25b892b9e0-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.233441 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b4737d-0022-41b0-af94-bb25b892b9e0-kube-api-access-wnc76" (OuterVolumeSpecName: "kube-api-access-wnc76") pod "76b4737d-0022-41b0-af94-bb25b892b9e0" (UID: "76b4737d-0022-41b0-af94-bb25b892b9e0"). InnerVolumeSpecName "kube-api-access-wnc76". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.251926 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-config-data" (OuterVolumeSpecName: "config-data") pod "76b4737d-0022-41b0-af94-bb25b892b9e0" (UID: "76b4737d-0022-41b0-af94-bb25b892b9e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.253176 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76b4737d-0022-41b0-af94-bb25b892b9e0" (UID: "76b4737d-0022-41b0-af94-bb25b892b9e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.281053 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "76b4737d-0022-41b0-af94-bb25b892b9e0" (UID: "76b4737d-0022-41b0-af94-bb25b892b9e0"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.283817 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "76b4737d-0022-41b0-af94-bb25b892b9e0" (UID: "76b4737d-0022-41b0-af94-bb25b892b9e0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.324890 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.324928 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.324937 4828 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.324947 4828 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76b4737d-0022-41b0-af94-bb25b892b9e0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.324957 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnc76\" (UniqueName: \"kubernetes.io/projected/76b4737d-0022-41b0-af94-bb25b892b9e0-kube-api-access-wnc76\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.611120 4828 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/certified-operators-5bw99"] Nov 29 07:28:04 crc kubenswrapper[4828]: E1129 07:28:04.611617 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b4737d-0022-41b0-af94-bb25b892b9e0" containerName="nova-api-log" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.611637 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b4737d-0022-41b0-af94-bb25b892b9e0" containerName="nova-api-log" Nov 29 07:28:04 crc kubenswrapper[4828]: E1129 07:28:04.611652 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b4737d-0022-41b0-af94-bb25b892b9e0" containerName="nova-api-api" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.611661 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b4737d-0022-41b0-af94-bb25b892b9e0" containerName="nova-api-api" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.611905 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b4737d-0022-41b0-af94-bb25b892b9e0" containerName="nova-api-log" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.611924 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b4737d-0022-41b0-af94-bb25b892b9e0" containerName="nova-api-api" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.614434 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.632146 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5bw99"] Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.730868 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" containerID="c2cc660a44a511085e10051976c006860703bb63faf27ed5cdb193e07d2a45d2" exitCode=0 Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.731201 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d31dd62-6c7f-4529-8da5-cfb615b653e2","Type":"ContainerDied","Data":"c2cc660a44a511085e10051976c006860703bb63faf27ed5cdb193e07d2a45d2"} Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.733568 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76b4737d-0022-41b0-af94-bb25b892b9e0","Type":"ContainerDied","Data":"5edf426f8423cadcdc2a196bda1c3e34661788670f4b635d4c9c4eceaaf73fcc"} Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.733628 4828 scope.go:117] "RemoveContainer" containerID="9bcaf9c09291be6a2e80af2aaa94f5796de5bb521bd640a8598e710478290f71" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.733657 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.735757 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098ba24a-3b08-423e-afd8-deec79080724-catalog-content\") pod \"certified-operators-5bw99\" (UID: \"098ba24a-3b08-423e-afd8-deec79080724\") " pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.735933 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098ba24a-3b08-423e-afd8-deec79080724-utilities\") pod \"certified-operators-5bw99\" (UID: \"098ba24a-3b08-423e-afd8-deec79080724\") " pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.735976 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqswl\" (UniqueName: \"kubernetes.io/projected/098ba24a-3b08-423e-afd8-deec79080724-kube-api-access-zqswl\") pod \"certified-operators-5bw99\" (UID: \"098ba24a-3b08-423e-afd8-deec79080724\") " pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.771168 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.787975 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.798617 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.803969 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.813612 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.813682 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.814178 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.814332 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.818140 4828 scope.go:117] "RemoveContainer" containerID="d4810129515d07f3b34a9d033f41496a32d197136bc4ab4ea9583d494120bb72" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.838239 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098ba24a-3b08-423e-afd8-deec79080724-utilities\") pod \"certified-operators-5bw99\" (UID: \"098ba24a-3b08-423e-afd8-deec79080724\") " pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.838275 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqswl\" (UniqueName: \"kubernetes.io/projected/098ba24a-3b08-423e-afd8-deec79080724-kube-api-access-zqswl\") pod \"certified-operators-5bw99\" (UID: \"098ba24a-3b08-423e-afd8-deec79080724\") " pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.838433 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098ba24a-3b08-423e-afd8-deec79080724-catalog-content\") pod \"certified-operators-5bw99\" (UID: 
\"098ba24a-3b08-423e-afd8-deec79080724\") " pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.838729 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098ba24a-3b08-423e-afd8-deec79080724-catalog-content\") pod \"certified-operators-5bw99\" (UID: \"098ba24a-3b08-423e-afd8-deec79080724\") " pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.838877 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098ba24a-3b08-423e-afd8-deec79080724-utilities\") pod \"certified-operators-5bw99\" (UID: \"098ba24a-3b08-423e-afd8-deec79080724\") " pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.912674 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqswl\" (UniqueName: \"kubernetes.io/projected/098ba24a-3b08-423e-afd8-deec79080724-kube-api-access-zqswl\") pod \"certified-operators-5bw99\" (UID: \"098ba24a-3b08-423e-afd8-deec79080724\") " pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.939959 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e53c7469-46ea-4683-97be-1b872217e983-public-tls-certs\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.940037 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnhhg\" (UniqueName: \"kubernetes.io/projected/e53c7469-46ea-4683-97be-1b872217e983-kube-api-access-bnhhg\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " 
pod="openstack/nova-api-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.940131 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e53c7469-46ea-4683-97be-1b872217e983-logs\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.940161 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53c7469-46ea-4683-97be-1b872217e983-config-data\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.940245 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e53c7469-46ea-4683-97be-1b872217e983-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.940300 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53c7469-46ea-4683-97be-1b872217e983-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:04 crc kubenswrapper[4828]: I1129 07:28:04.975659 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.041689 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e53c7469-46ea-4683-97be-1b872217e983-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.042036 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53c7469-46ea-4683-97be-1b872217e983-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.042083 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e53c7469-46ea-4683-97be-1b872217e983-public-tls-certs\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.042113 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnhhg\" (UniqueName: \"kubernetes.io/projected/e53c7469-46ea-4683-97be-1b872217e983-kube-api-access-bnhhg\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.042179 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e53c7469-46ea-4683-97be-1b872217e983-logs\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.042197 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e53c7469-46ea-4683-97be-1b872217e983-config-data\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.043436 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e53c7469-46ea-4683-97be-1b872217e983-logs\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.047073 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53c7469-46ea-4683-97be-1b872217e983-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.048046 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e53c7469-46ea-4683-97be-1b872217e983-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.051086 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53c7469-46ea-4683-97be-1b872217e983-config-data\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.051709 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e53c7469-46ea-4683-97be-1b872217e983-public-tls-certs\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.065948 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnhhg\" (UniqueName: \"kubernetes.io/projected/e53c7469-46ea-4683-97be-1b872217e983-kube-api-access-bnhhg\") pod \"nova-api-0\" (UID: \"e53c7469-46ea-4683-97be-1b872217e983\") " pod="openstack/nova-api-0" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.395701 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.425332 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6977eeb3-82a7-42c7-9bae-29b46a93a75e" path="/var/lib/kubelet/pods/6977eeb3-82a7-42c7-9bae-29b46a93a75e/volumes" Nov 29 07:28:05 crc kubenswrapper[4828]: I1129 07:28:05.425895 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76b4737d-0022-41b0-af94-bb25b892b9e0" path="/var/lib/kubelet/pods/76b4737d-0022-41b0-af94-bb25b892b9e0/volumes" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.467646 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.584187 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.708668 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d31dd62-6c7f-4529-8da5-cfb615b653e2-logs\") pod \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.708818 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-nova-metadata-tls-certs\") pod \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.708840 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69nz7\" (UniqueName: \"kubernetes.io/projected/5d31dd62-6c7f-4529-8da5-cfb615b653e2-kube-api-access-69nz7\") pod \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.708916 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-combined-ca-bundle\") pod \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.708945 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-config-data\") pod \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\" (UID: \"5d31dd62-6c7f-4529-8da5-cfb615b653e2\") " Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.709279 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5d31dd62-6c7f-4529-8da5-cfb615b653e2-logs" (OuterVolumeSpecName: "logs") pod "5d31dd62-6c7f-4529-8da5-cfb615b653e2" (UID: "5d31dd62-6c7f-4529-8da5-cfb615b653e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.709548 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d31dd62-6c7f-4529-8da5-cfb615b653e2-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.746505 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d31dd62-6c7f-4529-8da5-cfb615b653e2-kube-api-access-69nz7" (OuterVolumeSpecName: "kube-api-access-69nz7") pod "5d31dd62-6c7f-4529-8da5-cfb615b653e2" (UID: "5d31dd62-6c7f-4529-8da5-cfb615b653e2"). InnerVolumeSpecName "kube-api-access-69nz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.754893 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d31dd62-6c7f-4529-8da5-cfb615b653e2" (UID: "5d31dd62-6c7f-4529-8da5-cfb615b653e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.781791 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5d31dd62-6c7f-4529-8da5-cfb615b653e2" (UID: "5d31dd62-6c7f-4529-8da5-cfb615b653e2"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.787145 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd","Type":"ContainerStarted","Data":"730b23f2a76aa84982dc2f08b8ef2c37940a37885772b930254b5bdcb47e5714"} Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.790720 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d31dd62-6c7f-4529-8da5-cfb615b653e2","Type":"ContainerDied","Data":"b030bcee3cc018e37aa5db3b301cc247aefa79e45bce4350724ba13c1cf31790"} Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.790763 4828 scope.go:117] "RemoveContainer" containerID="c2cc660a44a511085e10051976c006860703bb63faf27ed5cdb193e07d2a45d2" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.790876 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.794173 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-config-data" (OuterVolumeSpecName: "config-data") pod "5d31dd62-6c7f-4529-8da5-cfb615b653e2" (UID: "5d31dd62-6c7f-4529-8da5-cfb615b653e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.810362 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05","Type":"ContainerStarted","Data":"a40639c494d28455de119fa07cf12815dbee0e494e8445c01adbd6da77f33f56"} Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.814457 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.814488 4828 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.814501 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69nz7\" (UniqueName: \"kubernetes.io/projected/5d31dd62-6c7f-4529-8da5-cfb615b653e2-kube-api-access-69nz7\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:05.814513 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d31dd62-6c7f-4529-8da5-cfb615b653e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.009813 4828 scope.go:117] "RemoveContainer" containerID="cdccc5e04b8d60115a27a2d2b6f11dde463c4917d0a99e668f4a909222886cb0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.134622 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.144893 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.161048 
4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:11 crc kubenswrapper[4828]: E1129 07:28:06.161610 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" containerName="nova-metadata-metadata" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.161628 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" containerName="nova-metadata-metadata" Nov 29 07:28:11 crc kubenswrapper[4828]: E1129 07:28:06.161646 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" containerName="nova-metadata-log" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.161654 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" containerName="nova-metadata-log" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.161888 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" containerName="nova-metadata-log" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.161920 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" containerName="nova-metadata-metadata" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.162932 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.165686 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.165888 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.179388 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.288743 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/166ca75a-f156-4ce3-9a12-7b76ba38f92e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.289027 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/166ca75a-f156-4ce3-9a12-7b76ba38f92e-config-data\") pod \"nova-metadata-0\" (UID: \"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.289071 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166ca75a-f156-4ce3-9a12-7b76ba38f92e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.289153 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/166ca75a-f156-4ce3-9a12-7b76ba38f92e-logs\") pod \"nova-metadata-0\" (UID: 
\"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.289188 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djswv\" (UniqueName: \"kubernetes.io/projected/166ca75a-f156-4ce3-9a12-7b76ba38f92e-kube-api-access-djswv\") pod \"nova-metadata-0\" (UID: \"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.389787 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/166ca75a-f156-4ce3-9a12-7b76ba38f92e-logs\") pod \"nova-metadata-0\" (UID: \"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.389836 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djswv\" (UniqueName: \"kubernetes.io/projected/166ca75a-f156-4ce3-9a12-7b76ba38f92e-kube-api-access-djswv\") pod \"nova-metadata-0\" (UID: \"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.389872 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/166ca75a-f156-4ce3-9a12-7b76ba38f92e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.389897 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/166ca75a-f156-4ce3-9a12-7b76ba38f92e-config-data\") pod \"nova-metadata-0\" (UID: \"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.389931 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166ca75a-f156-4ce3-9a12-7b76ba38f92e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.390910 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/166ca75a-f156-4ce3-9a12-7b76ba38f92e-logs\") pod \"nova-metadata-0\" (UID: \"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.407337 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/166ca75a-f156-4ce3-9a12-7b76ba38f92e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.407506 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/166ca75a-f156-4ce3-9a12-7b76ba38f92e-config-data\") pod \"nova-metadata-0\" (UID: \"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.409054 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166ca75a-f156-4ce3-9a12-7b76ba38f92e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.412352 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djswv\" (UniqueName: \"kubernetes.io/projected/166ca75a-f156-4ce3-9a12-7b76ba38f92e-kube-api-access-djswv\") pod \"nova-metadata-0\" 
(UID: \"166ca75a-f156-4ce3-9a12-7b76ba38f92e\") " pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:06.538958 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:07.259764 4828 scope.go:117] "RemoveContainer" containerID="682a70d914a94d3b46cae360223090d55304c98db24f93003e9debe2d196da63" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:07.427017 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" path="/var/lib/kubelet/pods/5d31dd62-6c7f-4529-8da5-cfb615b653e2/volumes" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:09.441610 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:09.456984 4828 scope.go:117] "RemoveContainer" containerID="d569c82f140a10382afe21a3ef6873ad2dcc4b1f0f77aeaef12ce23b77df315a" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:09.509113 4828 scope.go:117] "RemoveContainer" containerID="f30280e3af56f1d0ca9bdb6769fe40b8a6c68f867ea4691813686f5fc2d3cb79" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:09.569051 4828 scope.go:117] "RemoveContainer" containerID="70b55c114966fbe3c8f47bf771c404e27e29911d9c5f9588692ba92d19002bd0" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:09.592219 4828 scope.go:117] "RemoveContainer" containerID="e11bd9624f55cc4017804f0f6964bce48f684d2fb0d376ff52f453ba1bd5506b" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:09.868040 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05","Type":"ContainerStarted","Data":"ca5fcc8e60fc185f5770067fddf4072d7cee71f1a6141de19a0d270d04a5dd9a"} Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:09.870028 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-8b9sw" event={"ID":"ea97bab7-f379-4317-8a11-6035878e1085","Type":"ContainerStarted","Data":"b7a8788c60eeffeaa90fd352c903870b28fbb208b0228a0a8c85068bcf8c1d07"} Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:10.454660 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": dial tcp 10.217.0.205:8775: i/o timeout" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:10.454785 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5d31dd62-6c7f-4529-8da5-cfb615b653e2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:10.906956 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=7.906911362 podStartE2EDuration="7.906911362s" podCreationTimestamp="2025-11-29 07:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:28:10.902831267 +0000 UTC m=+1630.524907335" watchObservedRunningTime="2025-11-29 07:28:10.906911362 +0000 UTC m=+1630.528987410" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.657223 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2ktfg"] Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.660182 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.667991 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ktfg"] Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.788592 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrxsl\" (UniqueName: \"kubernetes.io/projected/37817a48-526a-4dc4-bec5-55203effe0b0-kube-api-access-zrxsl\") pod \"redhat-marketplace-2ktfg\" (UID: \"37817a48-526a-4dc4-bec5-55203effe0b0\") " pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.789118 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37817a48-526a-4dc4-bec5-55203effe0b0-utilities\") pod \"redhat-marketplace-2ktfg\" (UID: \"37817a48-526a-4dc4-bec5-55203effe0b0\") " pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.789236 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37817a48-526a-4dc4-bec5-55203effe0b0-catalog-content\") pod \"redhat-marketplace-2ktfg\" (UID: \"37817a48-526a-4dc4-bec5-55203effe0b0\") " pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.890599 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37817a48-526a-4dc4-bec5-55203effe0b0-utilities\") pod \"redhat-marketplace-2ktfg\" (UID: \"37817a48-526a-4dc4-bec5-55203effe0b0\") " pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.891630 4828 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37817a48-526a-4dc4-bec5-55203effe0b0-utilities\") pod \"redhat-marketplace-2ktfg\" (UID: \"37817a48-526a-4dc4-bec5-55203effe0b0\") " pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.892434 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37817a48-526a-4dc4-bec5-55203effe0b0-catalog-content\") pod \"redhat-marketplace-2ktfg\" (UID: \"37817a48-526a-4dc4-bec5-55203effe0b0\") " pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.894819 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrxsl\" (UniqueName: \"kubernetes.io/projected/37817a48-526a-4dc4-bec5-55203effe0b0-kube-api-access-zrxsl\") pod \"redhat-marketplace-2ktfg\" (UID: \"37817a48-526a-4dc4-bec5-55203effe0b0\") " pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.894474 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37817a48-526a-4dc4-bec5-55203effe0b0-catalog-content\") pod \"redhat-marketplace-2ktfg\" (UID: \"37817a48-526a-4dc4-bec5-55203effe0b0\") " pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.898534 4828 generic.go:334] "Generic (PLEG): container finished" podID="ea97bab7-f379-4317-8a11-6035878e1085" containerID="b7a8788c60eeffeaa90fd352c903870b28fbb208b0228a0a8c85068bcf8c1d07" exitCode=0 Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.898575 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8b9sw" 
event={"ID":"ea97bab7-f379-4317-8a11-6035878e1085","Type":"ContainerDied","Data":"b7a8788c60eeffeaa90fd352c903870b28fbb208b0228a0a8c85068bcf8c1d07"} Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.918950 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrxsl\" (UniqueName: \"kubernetes.io/projected/37817a48-526a-4dc4-bec5-55203effe0b0-kube-api-access-zrxsl\") pod \"redhat-marketplace-2ktfg\" (UID: \"37817a48-526a-4dc4-bec5-55203effe0b0\") " pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:11 crc kubenswrapper[4828]: I1129 07:28:11.986987 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:12 crc kubenswrapper[4828]: I1129 07:28:12.346380 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5bw99"] Nov 29 07:28:12 crc kubenswrapper[4828]: I1129 07:28:12.361377 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:12 crc kubenswrapper[4828]: I1129 07:28:12.373454 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:12 crc kubenswrapper[4828]: I1129 07:28:12.629420 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ktfg"] Nov 29 07:28:12 crc kubenswrapper[4828]: W1129 07:28:12.632010 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37817a48_526a_4dc4_bec5_55203effe0b0.slice/crio-cc65d00f08a485a1c3e83073dc9ac77386414d6ac21a85b65098e6b00a3b3aa2 WatchSource:0}: Error finding container cc65d00f08a485a1c3e83073dc9ac77386414d6ac21a85b65098e6b00a3b3aa2: Status 404 returned error can't find the container with id cc65d00f08a485a1c3e83073dc9ac77386414d6ac21a85b65098e6b00a3b3aa2 Nov 29 07:28:12 crc kubenswrapper[4828]: I1129 07:28:12.908094 4828 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"166ca75a-f156-4ce3-9a12-7b76ba38f92e","Type":"ContainerStarted","Data":"32f2af37850ceed0bfa1687e38fc003e238542e2bce0693ee4d89f18e8749d0c"} Nov 29 07:28:12 crc kubenswrapper[4828]: I1129 07:28:12.909514 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ktfg" event={"ID":"37817a48-526a-4dc4-bec5-55203effe0b0","Type":"ContainerStarted","Data":"cc65d00f08a485a1c3e83073dc9ac77386414d6ac21a85b65098e6b00a3b3aa2"} Nov 29 07:28:12 crc kubenswrapper[4828]: I1129 07:28:12.912881 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd","Type":"ContainerStarted","Data":"3522f6f427b28687f4b6a9790ab2777d3f5fc796d2e77013de4f17b52b62d668"} Nov 29 07:28:12 crc kubenswrapper[4828]: I1129 07:28:12.914195 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e53c7469-46ea-4683-97be-1b872217e983","Type":"ContainerStarted","Data":"bd9db223343b9a6849bd959e00e208d17b39eec60a621de6c25b77ae36dc32e0"} Nov 29 07:28:12 crc kubenswrapper[4828]: I1129 07:28:12.915632 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bw99" event={"ID":"098ba24a-3b08-423e-afd8-deec79080724","Type":"ContainerStarted","Data":"2dd3bbd4e5fff3fa11ab9ce9f79c01b929feb029da46459a984b6466fcdd406a"} Nov 29 07:28:13 crc kubenswrapper[4828]: I1129 07:28:13.412216 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:28:13 crc kubenswrapper[4828]: E1129 07:28:13.412838 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:28:13 crc kubenswrapper[4828]: I1129 07:28:13.930394 4828 generic.go:334] "Generic (PLEG): container finished" podID="37817a48-526a-4dc4-bec5-55203effe0b0" containerID="bb111c019d4a4db8062955c4a9e37a017297efe4ab6a756a357964cb73b354d8" exitCode=0 Nov 29 07:28:13 crc kubenswrapper[4828]: I1129 07:28:13.930488 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ktfg" event={"ID":"37817a48-526a-4dc4-bec5-55203effe0b0","Type":"ContainerDied","Data":"bb111c019d4a4db8062955c4a9e37a017297efe4ab6a756a357964cb73b354d8"} Nov 29 07:28:13 crc kubenswrapper[4828]: I1129 07:28:13.937583 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"166ca75a-f156-4ce3-9a12-7b76ba38f92e","Type":"ContainerStarted","Data":"dc20cd03a2dbf6093d97a3e423450fa04ae16c4ab83b865c6c4a2f1a8ea6cdce"} Nov 29 07:28:13 crc kubenswrapper[4828]: I1129 07:28:13.942433 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e53c7469-46ea-4683-97be-1b872217e983","Type":"ContainerStarted","Data":"f6df44fff3cdde0358edc6071fcae2516c8e7b368560e94474c6d5f6707df276"} Nov 29 07:28:13 crc kubenswrapper[4828]: I1129 07:28:13.943914 4828 generic.go:334] "Generic (PLEG): container finished" podID="098ba24a-3b08-423e-afd8-deec79080724" containerID="2d976aa08f2ce0d97991bed523a2f8d95a71887a1d3af657bc4c1c721a29fedd" exitCode=0 Nov 29 07:28:13 crc kubenswrapper[4828]: I1129 07:28:13.943959 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bw99" event={"ID":"098ba24a-3b08-423e-afd8-deec79080724","Type":"ContainerDied","Data":"2d976aa08f2ce0d97991bed523a2f8d95a71887a1d3af657bc4c1c721a29fedd"} Nov 29 07:28:14 crc 
kubenswrapper[4828]: I1129 07:28:14.100036 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 07:28:14 crc kubenswrapper[4828]: I1129 07:28:14.101212 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 29 07:28:14 crc kubenswrapper[4828]: I1129 07:28:14.130061 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 29 07:28:14 crc kubenswrapper[4828]: I1129 07:28:14.976313 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 29 07:28:15 crc kubenswrapper[4828]: I1129 07:28:15.975660 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"166ca75a-f156-4ce3-9a12-7b76ba38f92e","Type":"ContainerStarted","Data":"cc7bb5cd74512f10ae41a5de30aec90901eedfbbe5f2f3967940e8fb76c5575e"} Nov 29 07:28:15 crc kubenswrapper[4828]: I1129 07:28:15.980458 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e53c7469-46ea-4683-97be-1b872217e983","Type":"ContainerStarted","Data":"42af9ca6a25b012473efdbb8020c1d0a0e35834b26fa3fc361a61dfe54ac7667"} Nov 29 07:28:15 crc kubenswrapper[4828]: I1129 07:28:15.986710 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8b9sw" event={"ID":"ea97bab7-f379-4317-8a11-6035878e1085","Type":"ContainerStarted","Data":"92fce970859f95609f12104271339773df2c6d7dc60f01221d7907e1da5e11a6"} Nov 29 07:28:16 crc kubenswrapper[4828]: I1129 07:28:16.035933 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=12.035908368 podStartE2EDuration="12.035908368s" podCreationTimestamp="2025-11-29 07:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-29 07:28:16.034858231 +0000 UTC m=+1635.656934289" watchObservedRunningTime="2025-11-29 07:28:16.035908368 +0000 UTC m=+1635.657984426" Nov 29 07:28:16 crc kubenswrapper[4828]: I1129 07:28:16.048453 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=10.0484292 podStartE2EDuration="10.0484292s" podCreationTimestamp="2025-11-29 07:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:28:16.000045127 +0000 UTC m=+1635.622121195" watchObservedRunningTime="2025-11-29 07:28:16.0484292 +0000 UTC m=+1635.670505258" Nov 29 07:28:16 crc kubenswrapper[4828]: I1129 07:28:16.086899 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8b9sw" podStartSLOduration=4.633898902 podStartE2EDuration="16.086876878s" podCreationTimestamp="2025-11-29 07:28:00 +0000 UTC" firstStartedPulling="2025-11-29 07:28:02.703825864 +0000 UTC m=+1622.325901912" lastFinishedPulling="2025-11-29 07:28:14.15680383 +0000 UTC m=+1633.778879888" observedRunningTime="2025-11-29 07:28:16.079026806 +0000 UTC m=+1635.701102864" watchObservedRunningTime="2025-11-29 07:28:16.086876878 +0000 UTC m=+1635.708952936" Nov 29 07:28:16 crc kubenswrapper[4828]: I1129 07:28:16.539233 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 07:28:16 crc kubenswrapper[4828]: I1129 07:28:16.539313 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 07:28:16 crc kubenswrapper[4828]: I1129 07:28:16.539327 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:28:16 crc kubenswrapper[4828]: I1129 07:28:16.539337 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Nov 29 07:28:17 crc kubenswrapper[4828]: I1129 07:28:17.004939 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8d3ec51-1a59-47fd-96f9-d97022ca7fcd","Type":"ContainerStarted","Data":"43be2bd5e64848a7b7b06cb125689cd726ef80b0289b8aa125d2f96fcb120bc1"} Nov 29 07:28:17 crc kubenswrapper[4828]: I1129 07:28:17.006309 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:28:17 crc kubenswrapper[4828]: I1129 07:28:17.045299 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.819091783 podStartE2EDuration="30.045244061s" podCreationTimestamp="2025-11-29 07:27:47 +0000 UTC" firstStartedPulling="2025-11-29 07:27:48.455697583 +0000 UTC m=+1608.077773631" lastFinishedPulling="2025-11-29 07:28:15.681849851 +0000 UTC m=+1635.303925909" observedRunningTime="2025-11-29 07:28:17.039798001 +0000 UTC m=+1636.661874059" watchObservedRunningTime="2025-11-29 07:28:17.045244061 +0000 UTC m=+1636.667320119" Nov 29 07:28:17 crc kubenswrapper[4828]: I1129 07:28:17.553508 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="166ca75a-f156-4ce3-9a12-7b76ba38f92e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:28:17 crc kubenswrapper[4828]: I1129 07:28:17.553508 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="166ca75a-f156-4ce3-9a12-7b76ba38f92e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:28:19 crc kubenswrapper[4828]: I1129 07:28:19.027049 4828 generic.go:334] "Generic (PLEG): container finished" 
podID="37817a48-526a-4dc4-bec5-55203effe0b0" containerID="9a4e3cda16b71ed704b96a7fdadb09369fb2b4b67a659388251a4ee61ee92d29" exitCode=0 Nov 29 07:28:19 crc kubenswrapper[4828]: I1129 07:28:19.027308 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ktfg" event={"ID":"37817a48-526a-4dc4-bec5-55203effe0b0","Type":"ContainerDied","Data":"9a4e3cda16b71ed704b96a7fdadb09369fb2b4b67a659388251a4ee61ee92d29"} Nov 29 07:28:19 crc kubenswrapper[4828]: I1129 07:28:19.031198 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bw99" event={"ID":"098ba24a-3b08-423e-afd8-deec79080724","Type":"ContainerStarted","Data":"bd2b2a025c4f460d5b91d84dc2fac46807555b425b66425d84539324c474b776"} Nov 29 07:28:20 crc kubenswrapper[4828]: I1129 07:28:20.041925 4828 generic.go:334] "Generic (PLEG): container finished" podID="098ba24a-3b08-423e-afd8-deec79080724" containerID="bd2b2a025c4f460d5b91d84dc2fac46807555b425b66425d84539324c474b776" exitCode=0 Nov 29 07:28:20 crc kubenswrapper[4828]: I1129 07:28:20.042023 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bw99" event={"ID":"098ba24a-3b08-423e-afd8-deec79080724","Type":"ContainerDied","Data":"bd2b2a025c4f460d5b91d84dc2fac46807555b425b66425d84539324c474b776"} Nov 29 07:28:20 crc kubenswrapper[4828]: I1129 07:28:20.647384 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:20 crc kubenswrapper[4828]: I1129 07:28:20.647785 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:20 crc kubenswrapper[4828]: I1129 07:28:20.699579 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:21 crc kubenswrapper[4828]: I1129 07:28:21.059232 4828 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ktfg" event={"ID":"37817a48-526a-4dc4-bec5-55203effe0b0","Type":"ContainerStarted","Data":"568532491cbdc0ee2a43764afc8e2deffb9704c8a51fe510e0a96e8c528658ee"} Nov 29 07:28:21 crc kubenswrapper[4828]: I1129 07:28:21.079237 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2ktfg" podStartSLOduration=4.198796522 podStartE2EDuration="10.079217384s" podCreationTimestamp="2025-11-29 07:28:11 +0000 UTC" firstStartedPulling="2025-11-29 07:28:14.153604078 +0000 UTC m=+1633.775680136" lastFinishedPulling="2025-11-29 07:28:20.03402492 +0000 UTC m=+1639.656100998" observedRunningTime="2025-11-29 07:28:21.075500738 +0000 UTC m=+1640.697576796" watchObservedRunningTime="2025-11-29 07:28:21.079217384 +0000 UTC m=+1640.701293442" Nov 29 07:28:21 crc kubenswrapper[4828]: I1129 07:28:21.143070 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:21 crc kubenswrapper[4828]: I1129 07:28:21.988347 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:21 crc kubenswrapper[4828]: I1129 07:28:21.988745 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:22 crc kubenswrapper[4828]: I1129 07:28:22.078959 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bw99" event={"ID":"098ba24a-3b08-423e-afd8-deec79080724","Type":"ContainerStarted","Data":"642175b3719576d72547fb4e91e5add81624bf28b4aa8526942252cba1300a82"} Nov 29 07:28:22 crc kubenswrapper[4828]: I1129 07:28:22.948548 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5bw99" 
podStartSLOduration=12.251188599 podStartE2EDuration="18.94852306s" podCreationTimestamp="2025-11-29 07:28:04 +0000 UTC" firstStartedPulling="2025-11-29 07:28:14.154636604 +0000 UTC m=+1633.776712662" lastFinishedPulling="2025-11-29 07:28:20.851971065 +0000 UTC m=+1640.474047123" observedRunningTime="2025-11-29 07:28:22.100695348 +0000 UTC m=+1641.722771406" watchObservedRunningTime="2025-11-29 07:28:22.94852306 +0000 UTC m=+1642.570599128" Nov 29 07:28:22 crc kubenswrapper[4828]: I1129 07:28:22.953995 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8b9sw"] Nov 29 07:28:23 crc kubenswrapper[4828]: I1129 07:28:23.040294 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2ktfg" podUID="37817a48-526a-4dc4-bec5-55203effe0b0" containerName="registry-server" probeResult="failure" output=< Nov 29 07:28:23 crc kubenswrapper[4828]: timeout: failed to connect service ":50051" within 1s Nov 29 07:28:23 crc kubenswrapper[4828]: > Nov 29 07:28:23 crc kubenswrapper[4828]: I1129 07:28:23.088093 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8b9sw" podUID="ea97bab7-f379-4317-8a11-6035878e1085" containerName="registry-server" containerID="cri-o://92fce970859f95609f12104271339773df2c6d7dc60f01221d7907e1da5e11a6" gracePeriod=2 Nov 29 07:28:24 crc kubenswrapper[4828]: I1129 07:28:24.100188 4828 generic.go:334] "Generic (PLEG): container finished" podID="ea97bab7-f379-4317-8a11-6035878e1085" containerID="92fce970859f95609f12104271339773df2c6d7dc60f01221d7907e1da5e11a6" exitCode=0 Nov 29 07:28:24 crc kubenswrapper[4828]: I1129 07:28:24.100280 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8b9sw" event={"ID":"ea97bab7-f379-4317-8a11-6035878e1085","Type":"ContainerDied","Data":"92fce970859f95609f12104271339773df2c6d7dc60f01221d7907e1da5e11a6"} Nov 29 07:28:24 
crc kubenswrapper[4828]: I1129 07:28:24.413636 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:28:24 crc kubenswrapper[4828]: E1129 07:28:24.413933 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:28:24 crc kubenswrapper[4828]: I1129 07:28:24.516668 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:24 crc kubenswrapper[4828]: I1129 07:28:24.565179 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbsfm\" (UniqueName: \"kubernetes.io/projected/ea97bab7-f379-4317-8a11-6035878e1085-kube-api-access-pbsfm\") pod \"ea97bab7-f379-4317-8a11-6035878e1085\" (UID: \"ea97bab7-f379-4317-8a11-6035878e1085\") " Nov 29 07:28:24 crc kubenswrapper[4828]: I1129 07:28:24.565462 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea97bab7-f379-4317-8a11-6035878e1085-utilities\") pod \"ea97bab7-f379-4317-8a11-6035878e1085\" (UID: \"ea97bab7-f379-4317-8a11-6035878e1085\") " Nov 29 07:28:24 crc kubenswrapper[4828]: I1129 07:28:24.565488 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea97bab7-f379-4317-8a11-6035878e1085-catalog-content\") pod \"ea97bab7-f379-4317-8a11-6035878e1085\" (UID: \"ea97bab7-f379-4317-8a11-6035878e1085\") " Nov 29 07:28:24 crc kubenswrapper[4828]: I1129 07:28:24.566470 4828 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea97bab7-f379-4317-8a11-6035878e1085-utilities" (OuterVolumeSpecName: "utilities") pod "ea97bab7-f379-4317-8a11-6035878e1085" (UID: "ea97bab7-f379-4317-8a11-6035878e1085"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:24 crc kubenswrapper[4828]: I1129 07:28:24.573897 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea97bab7-f379-4317-8a11-6035878e1085-kube-api-access-pbsfm" (OuterVolumeSpecName: "kube-api-access-pbsfm") pod "ea97bab7-f379-4317-8a11-6035878e1085" (UID: "ea97bab7-f379-4317-8a11-6035878e1085"). InnerVolumeSpecName "kube-api-access-pbsfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:24 crc kubenswrapper[4828]: I1129 07:28:24.613908 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea97bab7-f379-4317-8a11-6035878e1085-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea97bab7-f379-4317-8a11-6035878e1085" (UID: "ea97bab7-f379-4317-8a11-6035878e1085"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:24 crc kubenswrapper[4828]: I1129 07:28:24.668035 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea97bab7-f379-4317-8a11-6035878e1085-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:24 crc kubenswrapper[4828]: I1129 07:28:24.668097 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea97bab7-f379-4317-8a11-6035878e1085-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:24 crc kubenswrapper[4828]: I1129 07:28:24.668112 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbsfm\" (UniqueName: \"kubernetes.io/projected/ea97bab7-f379-4317-8a11-6035878e1085-kube-api-access-pbsfm\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:24 crc kubenswrapper[4828]: I1129 07:28:24.977204 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:24 crc kubenswrapper[4828]: I1129 07:28:24.977302 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:25 crc kubenswrapper[4828]: I1129 07:28:25.026241 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:25 crc kubenswrapper[4828]: I1129 07:28:25.112768 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8b9sw" Nov 29 07:28:25 crc kubenswrapper[4828]: I1129 07:28:25.112837 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8b9sw" event={"ID":"ea97bab7-f379-4317-8a11-6035878e1085","Type":"ContainerDied","Data":"c1b9941aee660ab23b5f73ebb433e841d2942736d206c073b2b9a68cb9e63bf9"} Nov 29 07:28:25 crc kubenswrapper[4828]: I1129 07:28:25.112878 4828 scope.go:117] "RemoveContainer" containerID="92fce970859f95609f12104271339773df2c6d7dc60f01221d7907e1da5e11a6" Nov 29 07:28:25 crc kubenswrapper[4828]: I1129 07:28:25.155817 4828 scope.go:117] "RemoveContainer" containerID="b7a8788c60eeffeaa90fd352c903870b28fbb208b0228a0a8c85068bcf8c1d07" Nov 29 07:28:25 crc kubenswrapper[4828]: I1129 07:28:25.167503 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8b9sw"] Nov 29 07:28:25 crc kubenswrapper[4828]: I1129 07:28:25.195430 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8b9sw"] Nov 29 07:28:25 crc kubenswrapper[4828]: I1129 07:28:25.196079 4828 scope.go:117] "RemoveContainer" containerID="2854e1989edbc68bdf993a7d80b9433fecd94a524711b37c4567c7d6fb7f4ddc" Nov 29 07:28:25 crc kubenswrapper[4828]: I1129 07:28:25.397407 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:28:25 crc kubenswrapper[4828]: I1129 07:28:25.397488 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:28:25 crc kubenswrapper[4828]: I1129 07:28:25.425704 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea97bab7-f379-4317-8a11-6035878e1085" path="/var/lib/kubelet/pods/ea97bab7-f379-4317-8a11-6035878e1085/volumes" Nov 29 07:28:26 crc kubenswrapper[4828]: I1129 07:28:26.413585 4828 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="e53c7469-46ea-4683-97be-1b872217e983" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.215:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:28:26 crc kubenswrapper[4828]: I1129 07:28:26.413591 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e53c7469-46ea-4683-97be-1b872217e983" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.215:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:28:26 crc kubenswrapper[4828]: I1129 07:28:26.544775 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 07:28:26 crc kubenswrapper[4828]: I1129 07:28:26.546256 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 07:28:26 crc kubenswrapper[4828]: I1129 07:28:26.552960 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 07:28:27 crc kubenswrapper[4828]: I1129 07:28:27.147723 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 07:28:32 crc kubenswrapper[4828]: I1129 07:28:32.038913 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:32 crc kubenswrapper[4828]: I1129 07:28:32.096209 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:32 crc kubenswrapper[4828]: I1129 07:28:32.279944 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ktfg"] Nov 29 07:28:33 crc kubenswrapper[4828]: I1129 07:28:33.201881 4828 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-2ktfg" podUID="37817a48-526a-4dc4-bec5-55203effe0b0" containerName="registry-server" containerID="cri-o://568532491cbdc0ee2a43764afc8e2deffb9704c8a51fe510e0a96e8c528658ee" gracePeriod=2 Nov 29 07:28:33 crc kubenswrapper[4828]: I1129 07:28:33.711945 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:33 crc kubenswrapper[4828]: I1129 07:28:33.866011 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37817a48-526a-4dc4-bec5-55203effe0b0-catalog-content\") pod \"37817a48-526a-4dc4-bec5-55203effe0b0\" (UID: \"37817a48-526a-4dc4-bec5-55203effe0b0\") " Nov 29 07:28:33 crc kubenswrapper[4828]: I1129 07:28:33.866202 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37817a48-526a-4dc4-bec5-55203effe0b0-utilities\") pod \"37817a48-526a-4dc4-bec5-55203effe0b0\" (UID: \"37817a48-526a-4dc4-bec5-55203effe0b0\") " Nov 29 07:28:33 crc kubenswrapper[4828]: I1129 07:28:33.866425 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrxsl\" (UniqueName: \"kubernetes.io/projected/37817a48-526a-4dc4-bec5-55203effe0b0-kube-api-access-zrxsl\") pod \"37817a48-526a-4dc4-bec5-55203effe0b0\" (UID: \"37817a48-526a-4dc4-bec5-55203effe0b0\") " Nov 29 07:28:33 crc kubenswrapper[4828]: I1129 07:28:33.867778 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37817a48-526a-4dc4-bec5-55203effe0b0-utilities" (OuterVolumeSpecName: "utilities") pod "37817a48-526a-4dc4-bec5-55203effe0b0" (UID: "37817a48-526a-4dc4-bec5-55203effe0b0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:33 crc kubenswrapper[4828]: I1129 07:28:33.877782 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37817a48-526a-4dc4-bec5-55203effe0b0-kube-api-access-zrxsl" (OuterVolumeSpecName: "kube-api-access-zrxsl") pod "37817a48-526a-4dc4-bec5-55203effe0b0" (UID: "37817a48-526a-4dc4-bec5-55203effe0b0"). InnerVolumeSpecName "kube-api-access-zrxsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:33 crc kubenswrapper[4828]: I1129 07:28:33.882017 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37817a48-526a-4dc4-bec5-55203effe0b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37817a48-526a-4dc4-bec5-55203effe0b0" (UID: "37817a48-526a-4dc4-bec5-55203effe0b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:33 crc kubenswrapper[4828]: I1129 07:28:33.968969 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrxsl\" (UniqueName: \"kubernetes.io/projected/37817a48-526a-4dc4-bec5-55203effe0b0-kube-api-access-zrxsl\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:33 crc kubenswrapper[4828]: I1129 07:28:33.969007 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37817a48-526a-4dc4-bec5-55203effe0b0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:33 crc kubenswrapper[4828]: I1129 07:28:33.969020 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37817a48-526a-4dc4-bec5-55203effe0b0-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.213529 4828 generic.go:334] "Generic (PLEG): container finished" podID="37817a48-526a-4dc4-bec5-55203effe0b0" 
containerID="568532491cbdc0ee2a43764afc8e2deffb9704c8a51fe510e0a96e8c528658ee" exitCode=0 Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.213577 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ktfg" event={"ID":"37817a48-526a-4dc4-bec5-55203effe0b0","Type":"ContainerDied","Data":"568532491cbdc0ee2a43764afc8e2deffb9704c8a51fe510e0a96e8c528658ee"} Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.213597 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ktfg" Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.213617 4828 scope.go:117] "RemoveContainer" containerID="568532491cbdc0ee2a43764afc8e2deffb9704c8a51fe510e0a96e8c528658ee" Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.213604 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ktfg" event={"ID":"37817a48-526a-4dc4-bec5-55203effe0b0","Type":"ContainerDied","Data":"cc65d00f08a485a1c3e83073dc9ac77386414d6ac21a85b65098e6b00a3b3aa2"} Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.244037 4828 scope.go:117] "RemoveContainer" containerID="9a4e3cda16b71ed704b96a7fdadb09369fb2b4b67a659388251a4ee61ee92d29" Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.246840 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ktfg"] Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.256415 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ktfg"] Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.277978 4828 scope.go:117] "RemoveContainer" containerID="bb111c019d4a4db8062955c4a9e37a017297efe4ab6a756a357964cb73b354d8" Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.317808 4828 scope.go:117] "RemoveContainer" containerID="568532491cbdc0ee2a43764afc8e2deffb9704c8a51fe510e0a96e8c528658ee" Nov 29 
07:28:34 crc kubenswrapper[4828]: E1129 07:28:34.318382 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"568532491cbdc0ee2a43764afc8e2deffb9704c8a51fe510e0a96e8c528658ee\": container with ID starting with 568532491cbdc0ee2a43764afc8e2deffb9704c8a51fe510e0a96e8c528658ee not found: ID does not exist" containerID="568532491cbdc0ee2a43764afc8e2deffb9704c8a51fe510e0a96e8c528658ee" Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.318421 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"568532491cbdc0ee2a43764afc8e2deffb9704c8a51fe510e0a96e8c528658ee"} err="failed to get container status \"568532491cbdc0ee2a43764afc8e2deffb9704c8a51fe510e0a96e8c528658ee\": rpc error: code = NotFound desc = could not find container \"568532491cbdc0ee2a43764afc8e2deffb9704c8a51fe510e0a96e8c528658ee\": container with ID starting with 568532491cbdc0ee2a43764afc8e2deffb9704c8a51fe510e0a96e8c528658ee not found: ID does not exist" Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.318442 4828 scope.go:117] "RemoveContainer" containerID="9a4e3cda16b71ed704b96a7fdadb09369fb2b4b67a659388251a4ee61ee92d29" Nov 29 07:28:34 crc kubenswrapper[4828]: E1129 07:28:34.318766 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a4e3cda16b71ed704b96a7fdadb09369fb2b4b67a659388251a4ee61ee92d29\": container with ID starting with 9a4e3cda16b71ed704b96a7fdadb09369fb2b4b67a659388251a4ee61ee92d29 not found: ID does not exist" containerID="9a4e3cda16b71ed704b96a7fdadb09369fb2b4b67a659388251a4ee61ee92d29" Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.318790 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a4e3cda16b71ed704b96a7fdadb09369fb2b4b67a659388251a4ee61ee92d29"} err="failed to get container status 
\"9a4e3cda16b71ed704b96a7fdadb09369fb2b4b67a659388251a4ee61ee92d29\": rpc error: code = NotFound desc = could not find container \"9a4e3cda16b71ed704b96a7fdadb09369fb2b4b67a659388251a4ee61ee92d29\": container with ID starting with 9a4e3cda16b71ed704b96a7fdadb09369fb2b4b67a659388251a4ee61ee92d29 not found: ID does not exist" Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.318808 4828 scope.go:117] "RemoveContainer" containerID="bb111c019d4a4db8062955c4a9e37a017297efe4ab6a756a357964cb73b354d8" Nov 29 07:28:34 crc kubenswrapper[4828]: E1129 07:28:34.319036 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb111c019d4a4db8062955c4a9e37a017297efe4ab6a756a357964cb73b354d8\": container with ID starting with bb111c019d4a4db8062955c4a9e37a017297efe4ab6a756a357964cb73b354d8 not found: ID does not exist" containerID="bb111c019d4a4db8062955c4a9e37a017297efe4ab6a756a357964cb73b354d8" Nov 29 07:28:34 crc kubenswrapper[4828]: I1129 07:28:34.319068 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb111c019d4a4db8062955c4a9e37a017297efe4ab6a756a357964cb73b354d8"} err="failed to get container status \"bb111c019d4a4db8062955c4a9e37a017297efe4ab6a756a357964cb73b354d8\": rpc error: code = NotFound desc = could not find container \"bb111c019d4a4db8062955c4a9e37a017297efe4ab6a756a357964cb73b354d8\": container with ID starting with bb111c019d4a4db8062955c4a9e37a017297efe4ab6a756a357964cb73b354d8 not found: ID does not exist" Nov 29 07:28:35 crc kubenswrapper[4828]: I1129 07:28:35.025815 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:35 crc kubenswrapper[4828]: I1129 07:28:35.396653 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4828]: I1129 07:28:35.397028 4828 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4828]: I1129 07:28:35.403387 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4828]: I1129 07:28:35.404844 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4828]: I1129 07:28:35.426212 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37817a48-526a-4dc4-bec5-55203effe0b0" path="/var/lib/kubelet/pods/37817a48-526a-4dc4-bec5-55203effe0b0/volumes" Nov 29 07:28:36 crc kubenswrapper[4828]: I1129 07:28:36.236482 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 07:28:36 crc kubenswrapper[4828]: I1129 07:28:36.237856 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 07:28:36 crc kubenswrapper[4828]: I1129 07:28:36.412296 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:28:36 crc kubenswrapper[4828]: E1129 07:28:36.412824 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:28:36 crc kubenswrapper[4828]: I1129 07:28:36.679974 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5bw99"] Nov 29 07:28:36 crc kubenswrapper[4828]: I1129 07:28:36.680310 4828 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-5bw99" podUID="098ba24a-3b08-423e-afd8-deec79080724" containerName="registry-server" containerID="cri-o://642175b3719576d72547fb4e91e5add81624bf28b4aa8526942252cba1300a82" gracePeriod=2 Nov 29 07:28:37 crc kubenswrapper[4828]: I1129 07:28:37.249515 4828 generic.go:334] "Generic (PLEG): container finished" podID="098ba24a-3b08-423e-afd8-deec79080724" containerID="642175b3719576d72547fb4e91e5add81624bf28b4aa8526942252cba1300a82" exitCode=0 Nov 29 07:28:37 crc kubenswrapper[4828]: I1129 07:28:37.250461 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bw99" event={"ID":"098ba24a-3b08-423e-afd8-deec79080724","Type":"ContainerDied","Data":"642175b3719576d72547fb4e91e5add81624bf28b4aa8526942252cba1300a82"} Nov 29 07:28:37 crc kubenswrapper[4828]: I1129 07:28:37.839091 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:37 crc kubenswrapper[4828]: I1129 07:28:37.944392 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098ba24a-3b08-423e-afd8-deec79080724-utilities\") pod \"098ba24a-3b08-423e-afd8-deec79080724\" (UID: \"098ba24a-3b08-423e-afd8-deec79080724\") " Nov 29 07:28:37 crc kubenswrapper[4828]: I1129 07:28:37.945128 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098ba24a-3b08-423e-afd8-deec79080724-catalog-content\") pod \"098ba24a-3b08-423e-afd8-deec79080724\" (UID: \"098ba24a-3b08-423e-afd8-deec79080724\") " Nov 29 07:28:37 crc kubenswrapper[4828]: I1129 07:28:37.945292 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqswl\" (UniqueName: \"kubernetes.io/projected/098ba24a-3b08-423e-afd8-deec79080724-kube-api-access-zqswl\") pod 
\"098ba24a-3b08-423e-afd8-deec79080724\" (UID: \"098ba24a-3b08-423e-afd8-deec79080724\") " Nov 29 07:28:37 crc kubenswrapper[4828]: I1129 07:28:37.945427 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098ba24a-3b08-423e-afd8-deec79080724-utilities" (OuterVolumeSpecName: "utilities") pod "098ba24a-3b08-423e-afd8-deec79080724" (UID: "098ba24a-3b08-423e-afd8-deec79080724"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:37 crc kubenswrapper[4828]: I1129 07:28:37.946062 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098ba24a-3b08-423e-afd8-deec79080724-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:37 crc kubenswrapper[4828]: I1129 07:28:37.956571 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098ba24a-3b08-423e-afd8-deec79080724-kube-api-access-zqswl" (OuterVolumeSpecName: "kube-api-access-zqswl") pod "098ba24a-3b08-423e-afd8-deec79080724" (UID: "098ba24a-3b08-423e-afd8-deec79080724"). InnerVolumeSpecName "kube-api-access-zqswl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:37 crc kubenswrapper[4828]: I1129 07:28:37.993197 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098ba24a-3b08-423e-afd8-deec79080724-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "098ba24a-3b08-423e-afd8-deec79080724" (UID: "098ba24a-3b08-423e-afd8-deec79080724"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:38 crc kubenswrapper[4828]: I1129 07:28:38.050338 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098ba24a-3b08-423e-afd8-deec79080724-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:38 crc kubenswrapper[4828]: I1129 07:28:38.050722 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqswl\" (UniqueName: \"kubernetes.io/projected/098ba24a-3b08-423e-afd8-deec79080724-kube-api-access-zqswl\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:38 crc kubenswrapper[4828]: I1129 07:28:38.262025 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5bw99" Nov 29 07:28:38 crc kubenswrapper[4828]: I1129 07:28:38.262056 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bw99" event={"ID":"098ba24a-3b08-423e-afd8-deec79080724","Type":"ContainerDied","Data":"2dd3bbd4e5fff3fa11ab9ce9f79c01b929feb029da46459a984b6466fcdd406a"} Nov 29 07:28:38 crc kubenswrapper[4828]: I1129 07:28:38.262136 4828 scope.go:117] "RemoveContainer" containerID="642175b3719576d72547fb4e91e5add81624bf28b4aa8526942252cba1300a82" Nov 29 07:28:38 crc kubenswrapper[4828]: I1129 07:28:38.287015 4828 scope.go:117] "RemoveContainer" containerID="bd2b2a025c4f460d5b91d84dc2fac46807555b425b66425d84539324c474b776" Nov 29 07:28:38 crc kubenswrapper[4828]: I1129 07:28:38.298851 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5bw99"] Nov 29 07:28:38 crc kubenswrapper[4828]: I1129 07:28:38.305873 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5bw99"] Nov 29 07:28:38 crc kubenswrapper[4828]: I1129 07:28:38.339093 4828 scope.go:117] "RemoveContainer" 
containerID="2d976aa08f2ce0d97991bed523a2f8d95a71887a1d3af657bc4c1c721a29fedd" Nov 29 07:28:39 crc kubenswrapper[4828]: I1129 07:28:39.423966 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="098ba24a-3b08-423e-afd8-deec79080724" path="/var/lib/kubelet/pods/098ba24a-3b08-423e-afd8-deec79080724/volumes" Nov 29 07:28:47 crc kubenswrapper[4828]: I1129 07:28:47.986133 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 07:28:51 crc kubenswrapper[4828]: I1129 07:28:51.417998 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:28:51 crc kubenswrapper[4828]: E1129 07:28:51.418944 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:28:59 crc kubenswrapper[4828]: I1129 07:28:59.342918 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:29:00 crc kubenswrapper[4828]: I1129 07:29:00.407301 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:29:03 crc kubenswrapper[4828]: I1129 07:29:03.411992 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:29:03 crc kubenswrapper[4828]: E1129 07:29:03.412552 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:29:03 crc kubenswrapper[4828]: I1129 07:29:03.929982 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="5e6d36a9-09a5-45d6-bae5-89a977408440" containerName="rabbitmq" containerID="cri-o://6fd1a0c6e16682cee6ba1e0f5902985866f71e012b89cfbe224a9a750a2cfc86" gracePeriod=604796 Nov 29 07:29:04 crc kubenswrapper[4828]: I1129 07:29:04.871047 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="23acf022-f4ef-4a49-8771-e07792440c6c" containerName="rabbitmq" containerID="cri-o://e71c12f86a4bc62d322d0dac35e19ea3054ec7117d11ee07d6f011064a993a79" gracePeriod=604796 Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.614594 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="23acf022-f4ef-4a49-8771-e07792440c6c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.636717 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5e6d36a9-09a5-45d6-bae5-89a977408440","Type":"ContainerDied","Data":"6fd1a0c6e16682cee6ba1e0f5902985866f71e012b89cfbe224a9a750a2cfc86"} Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.636659 4828 generic.go:334] "Generic (PLEG): container finished" podID="5e6d36a9-09a5-45d6-bae5-89a977408440" containerID="6fd1a0c6e16682cee6ba1e0f5902985866f71e012b89cfbe224a9a750a2cfc86" exitCode=0 Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.809373 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.903921 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-config-data\") pod \"5e6d36a9-09a5-45d6-bae5-89a977408440\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.904188 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-confd\") pod \"5e6d36a9-09a5-45d6-bae5-89a977408440\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.904245 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-tls\") pod \"5e6d36a9-09a5-45d6-bae5-89a977408440\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.904283 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-erlang-cookie\") pod \"5e6d36a9-09a5-45d6-bae5-89a977408440\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.904354 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-server-conf\") pod \"5e6d36a9-09a5-45d6-bae5-89a977408440\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.904481 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/5e6d36a9-09a5-45d6-bae5-89a977408440-erlang-cookie-secret\") pod \"5e6d36a9-09a5-45d6-bae5-89a977408440\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.904519 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsc82\" (UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-kube-api-access-dsc82\") pod \"5e6d36a9-09a5-45d6-bae5-89a977408440\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.904551 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-plugins\") pod \"5e6d36a9-09a5-45d6-bae5-89a977408440\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.904595 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"5e6d36a9-09a5-45d6-bae5-89a977408440\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.904618 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-plugins-conf\") pod \"5e6d36a9-09a5-45d6-bae5-89a977408440\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.904655 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5e6d36a9-09a5-45d6-bae5-89a977408440-pod-info\") pod \"5e6d36a9-09a5-45d6-bae5-89a977408440\" (UID: \"5e6d36a9-09a5-45d6-bae5-89a977408440\") " Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.906198 
4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5e6d36a9-09a5-45d6-bae5-89a977408440" (UID: "5e6d36a9-09a5-45d6-bae5-89a977408440"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.907525 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5e6d36a9-09a5-45d6-bae5-89a977408440" (UID: "5e6d36a9-09a5-45d6-bae5-89a977408440"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.908777 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5e6d36a9-09a5-45d6-bae5-89a977408440" (UID: "5e6d36a9-09a5-45d6-bae5-89a977408440"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.912813 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "5e6d36a9-09a5-45d6-bae5-89a977408440" (UID: "5e6d36a9-09a5-45d6-bae5-89a977408440"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.915021 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-kube-api-access-dsc82" (OuterVolumeSpecName: "kube-api-access-dsc82") pod "5e6d36a9-09a5-45d6-bae5-89a977408440" (UID: "5e6d36a9-09a5-45d6-bae5-89a977408440"). InnerVolumeSpecName "kube-api-access-dsc82". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.915603 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6d36a9-09a5-45d6-bae5-89a977408440-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5e6d36a9-09a5-45d6-bae5-89a977408440" (UID: "5e6d36a9-09a5-45d6-bae5-89a977408440"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.919657 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5e6d36a9-09a5-45d6-bae5-89a977408440" (UID: "5e6d36a9-09a5-45d6-bae5-89a977408440"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.929669 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5e6d36a9-09a5-45d6-bae5-89a977408440-pod-info" (OuterVolumeSpecName: "pod-info") pod "5e6d36a9-09a5-45d6-bae5-89a977408440" (UID: "5e6d36a9-09a5-45d6-bae5-89a977408440"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.980637 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-server-conf" (OuterVolumeSpecName: "server-conf") pod "5e6d36a9-09a5-45d6-bae5-89a977408440" (UID: "5e6d36a9-09a5-45d6-bae5-89a977408440"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:10 crc kubenswrapper[4828]: I1129 07:29:10.983706 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-config-data" (OuterVolumeSpecName: "config-data") pod "5e6d36a9-09a5-45d6-bae5-89a977408440" (UID: "5e6d36a9-09a5-45d6-bae5-89a977408440"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.007530 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.007910 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.007924 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.007932 4828 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-server-conf\") on node \"crc\" DevicePath \"\"" 
Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.007941 4828 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5e6d36a9-09a5-45d6-bae5-89a977408440-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.007951 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsc82\" (UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-kube-api-access-dsc82\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.007959 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.007985 4828 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.007995 4828 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5e6d36a9-09a5-45d6-bae5-89a977408440-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.008004 4828 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5e6d36a9-09a5-45d6-bae5-89a977408440-pod-info\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.039082 4828 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.072927 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5e6d36a9-09a5-45d6-bae5-89a977408440" (UID: "5e6d36a9-09a5-45d6-bae5-89a977408440"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.149490 4828 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.149539 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5e6d36a9-09a5-45d6-bae5-89a977408440-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:11 crc kubenswrapper[4828]: I1129 07:29:11.647455 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: E1129 07:29:13.353337 4828 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.942s" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.353597 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5e6d36a9-09a5-45d6-bae5-89a977408440","Type":"ContainerDied","Data":"128a2b71d52255617957dac1d3543a6829f892722b759505203d6ba5f156019a"} Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.354571 4828 scope.go:117] "RemoveContainer" containerID="6fd1a0c6e16682cee6ba1e0f5902985866f71e012b89cfbe224a9a750a2cfc86" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.378502 4828 scope.go:117] "RemoveContainer" containerID="72b485348990f04a8df44040dbe807689a31c54bd4f558da7c6ae35ad7f0ab45" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.404470 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 
29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.426212 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.452435 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:29:13 crc kubenswrapper[4828]: E1129 07:29:13.452921 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea97bab7-f379-4317-8a11-6035878e1085" containerName="registry-server" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.452942 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea97bab7-f379-4317-8a11-6035878e1085" containerName="registry-server" Nov 29 07:29:13 crc kubenswrapper[4828]: E1129 07:29:13.452957 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6d36a9-09a5-45d6-bae5-89a977408440" containerName="rabbitmq" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.452965 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6d36a9-09a5-45d6-bae5-89a977408440" containerName="rabbitmq" Nov 29 07:29:13 crc kubenswrapper[4828]: E1129 07:29:13.452977 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea97bab7-f379-4317-8a11-6035878e1085" containerName="extract-utilities" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.452983 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea97bab7-f379-4317-8a11-6035878e1085" containerName="extract-utilities" Nov 29 07:29:13 crc kubenswrapper[4828]: E1129 07:29:13.452996 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098ba24a-3b08-423e-afd8-deec79080724" containerName="extract-utilities" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.453002 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="098ba24a-3b08-423e-afd8-deec79080724" containerName="extract-utilities" Nov 29 07:29:13 crc kubenswrapper[4828]: E1129 07:29:13.453019 4828 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ea97bab7-f379-4317-8a11-6035878e1085" containerName="extract-content" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.453025 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea97bab7-f379-4317-8a11-6035878e1085" containerName="extract-content" Nov 29 07:29:13 crc kubenswrapper[4828]: E1129 07:29:13.453040 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6d36a9-09a5-45d6-bae5-89a977408440" containerName="setup-container" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.453046 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6d36a9-09a5-45d6-bae5-89a977408440" containerName="setup-container" Nov 29 07:29:13 crc kubenswrapper[4828]: E1129 07:29:13.453057 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098ba24a-3b08-423e-afd8-deec79080724" containerName="extract-content" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.453062 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="098ba24a-3b08-423e-afd8-deec79080724" containerName="extract-content" Nov 29 07:29:13 crc kubenswrapper[4828]: E1129 07:29:13.453089 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37817a48-526a-4dc4-bec5-55203effe0b0" containerName="extract-content" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.453096 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="37817a48-526a-4dc4-bec5-55203effe0b0" containerName="extract-content" Nov 29 07:29:13 crc kubenswrapper[4828]: E1129 07:29:13.453111 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37817a48-526a-4dc4-bec5-55203effe0b0" containerName="extract-utilities" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.453119 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="37817a48-526a-4dc4-bec5-55203effe0b0" containerName="extract-utilities" Nov 29 07:29:13 crc kubenswrapper[4828]: E1129 07:29:13.453127 4828 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="37817a48-526a-4dc4-bec5-55203effe0b0" containerName="registry-server" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.453136 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="37817a48-526a-4dc4-bec5-55203effe0b0" containerName="registry-server" Nov 29 07:29:13 crc kubenswrapper[4828]: E1129 07:29:13.453149 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098ba24a-3b08-423e-afd8-deec79080724" containerName="registry-server" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.453156 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="098ba24a-3b08-423e-afd8-deec79080724" containerName="registry-server" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.453402 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="37817a48-526a-4dc4-bec5-55203effe0b0" containerName="registry-server" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.453426 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6d36a9-09a5-45d6-bae5-89a977408440" containerName="rabbitmq" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.453441 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea97bab7-f379-4317-8a11-6035878e1085" containerName="registry-server" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.453454 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="098ba24a-3b08-423e-afd8-deec79080724" containerName="registry-server" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.454564 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.457500 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.457895 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.458055 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zfnnk" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.459692 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.459916 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.460063 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.460211 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.466376 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.603755 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcd1ae34-ece0-4632-8783-40db599d9ec4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.603824 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/dcd1ae34-ece0-4632-8783-40db599d9ec4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.603876 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcd1ae34-ece0-4632-8783-40db599d9ec4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.603925 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcd1ae34-ece0-4632-8783-40db599d9ec4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.604024 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcd1ae34-ece0-4632-8783-40db599d9ec4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.604076 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.604156 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dcd1ae34-ece0-4632-8783-40db599d9ec4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.604220 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcd1ae34-ece0-4632-8783-40db599d9ec4-config-data\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.604290 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcd1ae34-ece0-4632-8783-40db599d9ec4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.604339 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5tb7\" (UniqueName: \"kubernetes.io/projected/dcd1ae34-ece0-4632-8783-40db599d9ec4-kube-api-access-v5tb7\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.604423 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcd1ae34-ece0-4632-8783-40db599d9ec4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.705933 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcd1ae34-ece0-4632-8783-40db599d9ec4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " 
pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.705977 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.706031 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dcd1ae34-ece0-4632-8783-40db599d9ec4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.706072 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcd1ae34-ece0-4632-8783-40db599d9ec4-config-data\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.706100 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcd1ae34-ece0-4632-8783-40db599d9ec4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.706128 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5tb7\" (UniqueName: \"kubernetes.io/projected/dcd1ae34-ece0-4632-8783-40db599d9ec4-kube-api-access-v5tb7\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.706169 4828 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcd1ae34-ece0-4632-8783-40db599d9ec4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.706219 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcd1ae34-ece0-4632-8783-40db599d9ec4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.706240 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcd1ae34-ece0-4632-8783-40db599d9ec4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.706283 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcd1ae34-ece0-4632-8783-40db599d9ec4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.706310 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcd1ae34-ece0-4632-8783-40db599d9ec4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.706519 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: 
\"dcd1ae34-ece0-4632-8783-40db599d9ec4\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.707127 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcd1ae34-ece0-4632-8783-40db599d9ec4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.707345 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcd1ae34-ece0-4632-8783-40db599d9ec4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.707416 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcd1ae34-ece0-4632-8783-40db599d9ec4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.707626 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcd1ae34-ece0-4632-8783-40db599d9ec4-config-data\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.708398 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcd1ae34-ece0-4632-8783-40db599d9ec4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.712416 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcd1ae34-ece0-4632-8783-40db599d9ec4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.712903 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dcd1ae34-ece0-4632-8783-40db599d9ec4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.714226 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcd1ae34-ece0-4632-8783-40db599d9ec4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.714344 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcd1ae34-ece0-4632-8783-40db599d9ec4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.725461 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5tb7\" (UniqueName: \"kubernetes.io/projected/dcd1ae34-ece0-4632-8783-40db599d9ec4-kube-api-access-v5tb7\") pod \"rabbitmq-server-0\" (UID: \"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.751523 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: 
\"dcd1ae34-ece0-4632-8783-40db599d9ec4\") " pod="openstack/rabbitmq-server-0" Nov 29 07:29:13 crc kubenswrapper[4828]: I1129 07:29:13.797390 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.286860 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68df85789f-f2cn7"] Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.289133 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.295690 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.337344 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-f2cn7"] Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.375401 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.412593 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:29:14 crc kubenswrapper[4828]: E1129 07:29:14.412804 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.505722 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.505783 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-dns-svc\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.505831 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.505860 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.505878 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.505920 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-969cx\" (UniqueName: \"kubernetes.io/projected/a210a921-0e55-4f4e-872a-e478e7d893e4-kube-api-access-969cx\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.505939 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-config\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.611117 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.611400 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.611740 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-969cx\" (UniqueName: \"kubernetes.io/projected/a210a921-0e55-4f4e-872a-e478e7d893e4-kube-api-access-969cx\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.612036 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.612440 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-config\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.612906 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.613830 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.613969 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-dns-svc\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.614045 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-config\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.614554 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.614942 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-dns-svc\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.616061 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.616931 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.649141 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-969cx\" (UniqueName: 
\"kubernetes.io/projected/a210a921-0e55-4f4e-872a-e478e7d893e4-kube-api-access-969cx\") pod \"dnsmasq-dns-68df85789f-f2cn7\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.696612 4828 generic.go:334] "Generic (PLEG): container finished" podID="23acf022-f4ef-4a49-8771-e07792440c6c" containerID="e71c12f86a4bc62d322d0dac35e19ea3054ec7117d11ee07d6f011064a993a79" exitCode=0 Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.696704 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23acf022-f4ef-4a49-8771-e07792440c6c","Type":"ContainerDied","Data":"e71c12f86a4bc62d322d0dac35e19ea3054ec7117d11ee07d6f011064a993a79"} Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.698223 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dcd1ae34-ece0-4632-8783-40db599d9ec4","Type":"ContainerStarted","Data":"90ff3688f06924a7fcbf38cf7f1c4ebe1b25205a3afd6a465b532c50b82ca6d4"} Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.914004 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:14 crc kubenswrapper[4828]: I1129 07:29:14.997531 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.029522 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-server-conf\") pod \"23acf022-f4ef-4a49-8771-e07792440c6c\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.029671 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-tls\") pod \"23acf022-f4ef-4a49-8771-e07792440c6c\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.029697 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-config-data\") pod \"23acf022-f4ef-4a49-8771-e07792440c6c\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.029736 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-plugins-conf\") pod \"23acf022-f4ef-4a49-8771-e07792440c6c\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.029801 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-confd\") pod \"23acf022-f4ef-4a49-8771-e07792440c6c\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.029841 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-erlang-cookie\") pod \"23acf022-f4ef-4a49-8771-e07792440c6c\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.029927 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrp8p\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-kube-api-access-zrp8p\") pod \"23acf022-f4ef-4a49-8771-e07792440c6c\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.029982 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23acf022-f4ef-4a49-8771-e07792440c6c-erlang-cookie-secret\") pod \"23acf022-f4ef-4a49-8771-e07792440c6c\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.030050 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-plugins\") pod \"23acf022-f4ef-4a49-8771-e07792440c6c\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.030082 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23acf022-f4ef-4a49-8771-e07792440c6c-pod-info\") pod \"23acf022-f4ef-4a49-8771-e07792440c6c\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.030111 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"23acf022-f4ef-4a49-8771-e07792440c6c\" (UID: \"23acf022-f4ef-4a49-8771-e07792440c6c\") " Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 
07:29:15.038098 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "23acf022-f4ef-4a49-8771-e07792440c6c" (UID: "23acf022-f4ef-4a49-8771-e07792440c6c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.038314 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "23acf022-f4ef-4a49-8771-e07792440c6c" (UID: "23acf022-f4ef-4a49-8771-e07792440c6c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.038804 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "23acf022-f4ef-4a49-8771-e07792440c6c" (UID: "23acf022-f4ef-4a49-8771-e07792440c6c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.058211 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "23acf022-f4ef-4a49-8771-e07792440c6c" (UID: "23acf022-f4ef-4a49-8771-e07792440c6c"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.091188 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23acf022-f4ef-4a49-8771-e07792440c6c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "23acf022-f4ef-4a49-8771-e07792440c6c" (UID: "23acf022-f4ef-4a49-8771-e07792440c6c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.091631 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/23acf022-f4ef-4a49-8771-e07792440c6c-pod-info" (OuterVolumeSpecName: "pod-info") pod "23acf022-f4ef-4a49-8771-e07792440c6c" (UID: "23acf022-f4ef-4a49-8771-e07792440c6c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.114437 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "23acf022-f4ef-4a49-8771-e07792440c6c" (UID: "23acf022-f4ef-4a49-8771-e07792440c6c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.119148 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-kube-api-access-zrp8p" (OuterVolumeSpecName: "kube-api-access-zrp8p") pod "23acf022-f4ef-4a49-8771-e07792440c6c" (UID: "23acf022-f4ef-4a49-8771-e07792440c6c"). InnerVolumeSpecName "kube-api-access-zrp8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.132871 4828 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23acf022-f4ef-4a49-8771-e07792440c6c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.132906 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.132916 4828 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23acf022-f4ef-4a49-8771-e07792440c6c-pod-info\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.132944 4828 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.132956 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.132964 4828 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.132975 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.132983 4828 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrp8p\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-kube-api-access-zrp8p\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.144682 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-config-data" (OuterVolumeSpecName: "config-data") pod "23acf022-f4ef-4a49-8771-e07792440c6c" (UID: "23acf022-f4ef-4a49-8771-e07792440c6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.182739 4828 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.200828 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-server-conf" (OuterVolumeSpecName: "server-conf") pod "23acf022-f4ef-4a49-8771-e07792440c6c" (UID: "23acf022-f4ef-4a49-8771-e07792440c6c"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.235243 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.235300 4828 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.235314 4828 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23acf022-f4ef-4a49-8771-e07792440c6c-server-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.258320 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "23acf022-f4ef-4a49-8771-e07792440c6c" (UID: "23acf022-f4ef-4a49-8771-e07792440c6c"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.336638 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23acf022-f4ef-4a49-8771-e07792440c6c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.446540 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6d36a9-09a5-45d6-bae5-89a977408440" path="/var/lib/kubelet/pods/5e6d36a9-09a5-45d6-bae5-89a977408440/volumes" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.507208 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-f2cn7"] Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.591799 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="5e6d36a9-09a5-45d6-bae5-89a977408440" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: i/o timeout" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.708416 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-f2cn7" event={"ID":"a210a921-0e55-4f4e-872a-e478e7d893e4","Type":"ContainerStarted","Data":"442c9974a22382a0ba2e478a9ca4141cabacea0eb6d1e66132a806581087d405"} Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.710126 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23acf022-f4ef-4a49-8771-e07792440c6c","Type":"ContainerDied","Data":"3c2bcdecd74c631078ac649e66a993815c91aacad13fd3de075dfcb47053c99b"} Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.710261 4828 scope.go:117] "RemoveContainer" containerID="e71c12f86a4bc62d322d0dac35e19ea3054ec7117d11ee07d6f011064a993a79" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.710194 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.764593 4828 scope.go:117] "RemoveContainer" containerID="2a873c13c2f495a77812fb79e9150e2cc50d93ed2640dc7f8b77038240447f7f" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.796457 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.817904 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.839190 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:29:15 crc kubenswrapper[4828]: E1129 07:29:15.839806 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23acf022-f4ef-4a49-8771-e07792440c6c" containerName="rabbitmq" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.839831 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="23acf022-f4ef-4a49-8771-e07792440c6c" containerName="rabbitmq" Nov 29 07:29:15 crc kubenswrapper[4828]: E1129 07:29:15.839896 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23acf022-f4ef-4a49-8771-e07792440c6c" containerName="setup-container" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.839908 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="23acf022-f4ef-4a49-8771-e07792440c6c" containerName="setup-container" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.840307 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="23acf022-f4ef-4a49-8771-e07792440c6c" containerName="rabbitmq" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.843071 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.846935 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.847054 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.847167 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.847293 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.847345 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-x6wkx" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.847453 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.851212 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.855745 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.954602 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.954881 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.954938 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.954957 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.954997 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.955030 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.955065 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.955089 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.955106 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b94tf\" (UniqueName: \"kubernetes.io/projected/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-kube-api-access-b94tf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.955150 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:15 crc kubenswrapper[4828]: I1129 07:29:15.955175 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.072594 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.072652 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.072725 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.072780 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.072840 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.072874 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.072905 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b94tf\" (UniqueName: \"kubernetes.io/projected/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-kube-api-access-b94tf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.072971 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.073008 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.073042 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.075244 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") 
" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.075338 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.075637 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.075689 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.075966 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.076038 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.077111 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.081143 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.086862 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.087063 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.089249 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.100538 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b94tf\" (UniqueName: 
\"kubernetes.io/projected/2d69c925-6be3-4e39-8aa5-0e27cf8693cb-kube-api-access-b94tf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.119006 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d69c925-6be3-4e39-8aa5-0e27cf8693cb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.188849 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.618862 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:29:16 crc kubenswrapper[4828]: W1129 07:29:16.619893 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d69c925_6be3_4e39_8aa5_0e27cf8693cb.slice/crio-2be5f01f93c6598ecb145a222c1118c5824e7859b883ecf74a7e2e267a8fe656 WatchSource:0}: Error finding container 2be5f01f93c6598ecb145a222c1118c5824e7859b883ecf74a7e2e267a8fe656: Status 404 returned error can't find the container with id 2be5f01f93c6598ecb145a222c1118c5824e7859b883ecf74a7e2e267a8fe656 Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.719543 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dcd1ae34-ece0-4632-8783-40db599d9ec4","Type":"ContainerStarted","Data":"bbd40906e6a3bd90d749d3b04b2f79b7175672451c3d52bca2b5a257ef1f9266"} Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.723503 4828 generic.go:334] "Generic (PLEG): container finished" podID="a210a921-0e55-4f4e-872a-e478e7d893e4" containerID="9918a12627eb5d627a59af3458362b8f07d51be9cb48bc381a752b928fe5ec30" exitCode=0 Nov 
29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.723558 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-f2cn7" event={"ID":"a210a921-0e55-4f4e-872a-e478e7d893e4","Type":"ContainerDied","Data":"9918a12627eb5d627a59af3458362b8f07d51be9cb48bc381a752b928fe5ec30"} Nov 29 07:29:16 crc kubenswrapper[4828]: I1129 07:29:16.726740 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d69c925-6be3-4e39-8aa5-0e27cf8693cb","Type":"ContainerStarted","Data":"2be5f01f93c6598ecb145a222c1118c5824e7859b883ecf74a7e2e267a8fe656"} Nov 29 07:29:17 crc kubenswrapper[4828]: I1129 07:29:17.429900 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23acf022-f4ef-4a49-8771-e07792440c6c" path="/var/lib/kubelet/pods/23acf022-f4ef-4a49-8771-e07792440c6c/volumes" Nov 29 07:29:17 crc kubenswrapper[4828]: I1129 07:29:17.747049 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-f2cn7" event={"ID":"a210a921-0e55-4f4e-872a-e478e7d893e4","Type":"ContainerStarted","Data":"a76456312a4fbad9960b9ad72b7850bf23f51ff805d0deb0728fafa85d00a6ed"} Nov 29 07:29:17 crc kubenswrapper[4828]: I1129 07:29:17.747506 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:17 crc kubenswrapper[4828]: I1129 07:29:17.775262 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68df85789f-f2cn7" podStartSLOduration=3.7752404349999997 podStartE2EDuration="3.775240435s" podCreationTimestamp="2025-11-29 07:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:29:17.76642556 +0000 UTC m=+1697.388501618" watchObservedRunningTime="2025-11-29 07:29:17.775240435 +0000 UTC m=+1697.397316493" Nov 29 07:29:18 crc kubenswrapper[4828]: I1129 07:29:18.763983 
4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d69c925-6be3-4e39-8aa5-0e27cf8693cb","Type":"ContainerStarted","Data":"2027bc7fecfe3d896eb69e68d89490a7e62f92b64a36bdddc83270fd8666edcb"} Nov 29 07:29:24 crc kubenswrapper[4828]: I1129 07:29:24.916555 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:24 crc kubenswrapper[4828]: I1129 07:29:24.978988 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-f5tnw"] Nov 29 07:29:24 crc kubenswrapper[4828]: I1129 07:29:24.979475 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" podUID="b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" containerName="dnsmasq-dns" containerID="cri-o://8152aade310bb4fa7370968a50ece560b1a501fa4cf7c56ea64b84d08f19af8e" gracePeriod=10 Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.196288 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-bffmg"] Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.198304 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.211247 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-bffmg"] Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.354695 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-dns-svc\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.354802 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.354842 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.354877 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.354908 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.354948 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9zdr\" (UniqueName: \"kubernetes.io/projected/a60b38bd-cee2-4ea6-840a-828961fde751-kube-api-access-s9zdr\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.355008 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-config\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.456416 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-dns-svc\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.456538 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.456617 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.456653 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.456684 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.456713 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9zdr\" (UniqueName: \"kubernetes.io/projected/a60b38bd-cee2-4ea6-840a-828961fde751-kube-api-access-s9zdr\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.456753 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-config\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.457650 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.457848 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.457867 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.457870 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-config\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.458265 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-dns-svc\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.458294 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a60b38bd-cee2-4ea6-840a-828961fde751-openstack-edpm-ipam\") 
pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.487752 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9zdr\" (UniqueName: \"kubernetes.io/projected/a60b38bd-cee2-4ea6-840a-828961fde751-kube-api-access-s9zdr\") pod \"dnsmasq-dns-bb85b8995-bffmg\" (UID: \"a60b38bd-cee2-4ea6-840a-828961fde751\") " pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.526872 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.639449 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.763337 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-ovsdbserver-nb\") pod \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.763434 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-ovsdbserver-sb\") pod \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.763523 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-dns-svc\") pod \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 
07:29:25.763598 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-config\") pod \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.763646 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn66j\" (UniqueName: \"kubernetes.io/projected/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-kube-api-access-hn66j\") pod \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.763722 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-dns-swift-storage-0\") pod \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\" (UID: \"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d\") " Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.783961 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-kube-api-access-hn66j" (OuterVolumeSpecName: "kube-api-access-hn66j") pod "b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" (UID: "b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d"). InnerVolumeSpecName "kube-api-access-hn66j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.833248 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" (UID: "b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.836229 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" (UID: "b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.842371 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" (UID: "b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.847084 4828 generic.go:334] "Generic (PLEG): container finished" podID="b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" containerID="8152aade310bb4fa7370968a50ece560b1a501fa4cf7c56ea64b84d08f19af8e" exitCode=0 Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.847143 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" event={"ID":"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d","Type":"ContainerDied","Data":"8152aade310bb4fa7370968a50ece560b1a501fa4cf7c56ea64b84d08f19af8e"} Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.847180 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" event={"ID":"b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d","Type":"ContainerDied","Data":"b06ea8e26e89bb42c545a86602de2b165acee8df770a9e2ca79d19310cee55a0"} Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.847202 4828 scope.go:117] "RemoveContainer" 
containerID="8152aade310bb4fa7370968a50ece560b1a501fa4cf7c56ea64b84d08f19af8e" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.847510 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-f5tnw" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.847746 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-config" (OuterVolumeSpecName: "config") pod "b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" (UID: "b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.855607 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" (UID: "b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.866555 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.866902 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.866916 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.866926 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.866936 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn66j\" (UniqueName: \"kubernetes.io/projected/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-kube-api-access-hn66j\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.866947 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.887147 4828 scope.go:117] "RemoveContainer" containerID="7ef6092a36cfb158b024fdbfb1ef72ff850b08ba09bfa7b4ba730edee65b1349" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.927063 4828 scope.go:117] "RemoveContainer" 
containerID="8152aade310bb4fa7370968a50ece560b1a501fa4cf7c56ea64b84d08f19af8e" Nov 29 07:29:25 crc kubenswrapper[4828]: E1129 07:29:25.934057 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8152aade310bb4fa7370968a50ece560b1a501fa4cf7c56ea64b84d08f19af8e\": container with ID starting with 8152aade310bb4fa7370968a50ece560b1a501fa4cf7c56ea64b84d08f19af8e not found: ID does not exist" containerID="8152aade310bb4fa7370968a50ece560b1a501fa4cf7c56ea64b84d08f19af8e" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.934125 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8152aade310bb4fa7370968a50ece560b1a501fa4cf7c56ea64b84d08f19af8e"} err="failed to get container status \"8152aade310bb4fa7370968a50ece560b1a501fa4cf7c56ea64b84d08f19af8e\": rpc error: code = NotFound desc = could not find container \"8152aade310bb4fa7370968a50ece560b1a501fa4cf7c56ea64b84d08f19af8e\": container with ID starting with 8152aade310bb4fa7370968a50ece560b1a501fa4cf7c56ea64b84d08f19af8e not found: ID does not exist" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.934167 4828 scope.go:117] "RemoveContainer" containerID="7ef6092a36cfb158b024fdbfb1ef72ff850b08ba09bfa7b4ba730edee65b1349" Nov 29 07:29:25 crc kubenswrapper[4828]: E1129 07:29:25.935046 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef6092a36cfb158b024fdbfb1ef72ff850b08ba09bfa7b4ba730edee65b1349\": container with ID starting with 7ef6092a36cfb158b024fdbfb1ef72ff850b08ba09bfa7b4ba730edee65b1349 not found: ID does not exist" containerID="7ef6092a36cfb158b024fdbfb1ef72ff850b08ba09bfa7b4ba730edee65b1349" Nov 29 07:29:25 crc kubenswrapper[4828]: I1129 07:29:25.935078 4828 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7ef6092a36cfb158b024fdbfb1ef72ff850b08ba09bfa7b4ba730edee65b1349"} err="failed to get container status \"7ef6092a36cfb158b024fdbfb1ef72ff850b08ba09bfa7b4ba730edee65b1349\": rpc error: code = NotFound desc = could not find container \"7ef6092a36cfb158b024fdbfb1ef72ff850b08ba09bfa7b4ba730edee65b1349\": container with ID starting with 7ef6092a36cfb158b024fdbfb1ef72ff850b08ba09bfa7b4ba730edee65b1349 not found: ID does not exist" Nov 29 07:29:26 crc kubenswrapper[4828]: I1129 07:29:26.033861 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-bffmg"] Nov 29 07:29:26 crc kubenswrapper[4828]: I1129 07:29:26.253879 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-f5tnw"] Nov 29 07:29:26 crc kubenswrapper[4828]: I1129 07:29:26.264642 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-f5tnw"] Nov 29 07:29:26 crc kubenswrapper[4828]: I1129 07:29:26.412366 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:29:26 crc kubenswrapper[4828]: E1129 07:29:26.413328 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:29:26 crc kubenswrapper[4828]: I1129 07:29:26.869690 4828 generic.go:334] "Generic (PLEG): container finished" podID="a60b38bd-cee2-4ea6-840a-828961fde751" containerID="8558469ac52b0035bab029114ed4064a18ac6a093e5446941e3034aba599f6a2" exitCode=0 Nov 29 07:29:26 crc kubenswrapper[4828]: I1129 07:29:26.869766 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-bb85b8995-bffmg" event={"ID":"a60b38bd-cee2-4ea6-840a-828961fde751","Type":"ContainerDied","Data":"8558469ac52b0035bab029114ed4064a18ac6a093e5446941e3034aba599f6a2"} Nov 29 07:29:26 crc kubenswrapper[4828]: I1129 07:29:26.869800 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-bffmg" event={"ID":"a60b38bd-cee2-4ea6-840a-828961fde751","Type":"ContainerStarted","Data":"d6fcaeae4ed433350d1daa9bf4a9872b7d156a582da49140bd25b6e1ded2c52c"} Nov 29 07:29:27 crc kubenswrapper[4828]: I1129 07:29:27.424788 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" path="/var/lib/kubelet/pods/b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d/volumes" Nov 29 07:29:27 crc kubenswrapper[4828]: I1129 07:29:27.880445 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-bffmg" event={"ID":"a60b38bd-cee2-4ea6-840a-828961fde751","Type":"ContainerStarted","Data":"ed4c42b96521a100ed5f5e5ebb3d3642ea8e36e4ae218e46adf2dd76dcb936f4"} Nov 29 07:29:27 crc kubenswrapper[4828]: I1129 07:29:27.880622 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:27 crc kubenswrapper[4828]: I1129 07:29:27.902788 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb85b8995-bffmg" podStartSLOduration=2.9027728440000002 podStartE2EDuration="2.902772844s" podCreationTimestamp="2025-11-29 07:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:29:27.901889641 +0000 UTC m=+1707.523965709" watchObservedRunningTime="2025-11-29 07:29:27.902772844 +0000 UTC m=+1707.524848902" Nov 29 07:29:35 crc kubenswrapper[4828]: I1129 07:29:35.529495 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-bb85b8995-bffmg" Nov 29 07:29:35 crc kubenswrapper[4828]: I1129 07:29:35.588457 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-f2cn7"] Nov 29 07:29:35 crc kubenswrapper[4828]: I1129 07:29:35.589060 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68df85789f-f2cn7" podUID="a210a921-0e55-4f4e-872a-e478e7d893e4" containerName="dnsmasq-dns" containerID="cri-o://a76456312a4fbad9960b9ad72b7850bf23f51ff805d0deb0728fafa85d00a6ed" gracePeriod=10 Nov 29 07:29:36 crc kubenswrapper[4828]: I1129 07:29:36.907343 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:36 crc kubenswrapper[4828]: I1129 07:29:36.962709 4828 generic.go:334] "Generic (PLEG): container finished" podID="a210a921-0e55-4f4e-872a-e478e7d893e4" containerID="a76456312a4fbad9960b9ad72b7850bf23f51ff805d0deb0728fafa85d00a6ed" exitCode=0 Nov 29 07:29:36 crc kubenswrapper[4828]: I1129 07:29:36.962788 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-f2cn7" event={"ID":"a210a921-0e55-4f4e-872a-e478e7d893e4","Type":"ContainerDied","Data":"a76456312a4fbad9960b9ad72b7850bf23f51ff805d0deb0728fafa85d00a6ed"} Nov 29 07:29:36 crc kubenswrapper[4828]: I1129 07:29:36.962825 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-f2cn7" event={"ID":"a210a921-0e55-4f4e-872a-e478e7d893e4","Type":"ContainerDied","Data":"442c9974a22382a0ba2e478a9ca4141cabacea0eb6d1e66132a806581087d405"} Nov 29 07:29:36 crc kubenswrapper[4828]: I1129 07:29:36.962847 4828 scope.go:117] "RemoveContainer" containerID="a76456312a4fbad9960b9ad72b7850bf23f51ff805d0deb0728fafa85d00a6ed" Nov 29 07:29:36 crc kubenswrapper[4828]: I1129 07:29:36.962970 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-f2cn7" Nov 29 07:29:36 crc kubenswrapper[4828]: I1129 07:29:36.983126 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-969cx\" (UniqueName: \"kubernetes.io/projected/a210a921-0e55-4f4e-872a-e478e7d893e4-kube-api-access-969cx\") pod \"a210a921-0e55-4f4e-872a-e478e7d893e4\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " Nov 29 07:29:36 crc kubenswrapper[4828]: I1129 07:29:36.983224 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-dns-swift-storage-0\") pod \"a210a921-0e55-4f4e-872a-e478e7d893e4\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " Nov 29 07:29:36 crc kubenswrapper[4828]: I1129 07:29:36.983358 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-config\") pod \"a210a921-0e55-4f4e-872a-e478e7d893e4\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " Nov 29 07:29:36 crc kubenswrapper[4828]: I1129 07:29:36.983391 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-ovsdbserver-nb\") pod \"a210a921-0e55-4f4e-872a-e478e7d893e4\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " Nov 29 07:29:36 crc kubenswrapper[4828]: I1129 07:29:36.983471 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-openstack-edpm-ipam\") pod \"a210a921-0e55-4f4e-872a-e478e7d893e4\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " Nov 29 07:29:36 crc kubenswrapper[4828]: I1129 07:29:36.983539 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-ovsdbserver-sb\") pod \"a210a921-0e55-4f4e-872a-e478e7d893e4\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " Nov 29 07:29:36 crc kubenswrapper[4828]: I1129 07:29:36.983618 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-dns-svc\") pod \"a210a921-0e55-4f4e-872a-e478e7d893e4\" (UID: \"a210a921-0e55-4f4e-872a-e478e7d893e4\") " Nov 29 07:29:36 crc kubenswrapper[4828]: I1129 07:29:36.996533 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a210a921-0e55-4f4e-872a-e478e7d893e4-kube-api-access-969cx" (OuterVolumeSpecName: "kube-api-access-969cx") pod "a210a921-0e55-4f4e-872a-e478e7d893e4" (UID: "a210a921-0e55-4f4e-872a-e478e7d893e4"). InnerVolumeSpecName "kube-api-access-969cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.031410 4828 scope.go:117] "RemoveContainer" containerID="9918a12627eb5d627a59af3458362b8f07d51be9cb48bc381a752b928fe5ec30" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.047330 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a210a921-0e55-4f4e-872a-e478e7d893e4" (UID: "a210a921-0e55-4f4e-872a-e478e7d893e4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.058813 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a210a921-0e55-4f4e-872a-e478e7d893e4" (UID: "a210a921-0e55-4f4e-872a-e478e7d893e4"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.066408 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-config" (OuterVolumeSpecName: "config") pod "a210a921-0e55-4f4e-872a-e478e7d893e4" (UID: "a210a921-0e55-4f4e-872a-e478e7d893e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.076556 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a210a921-0e55-4f4e-872a-e478e7d893e4" (UID: "a210a921-0e55-4f4e-872a-e478e7d893e4"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.085571 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.085605 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.085615 4828 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.085624 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.085633 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-969cx\" (UniqueName: \"kubernetes.io/projected/a210a921-0e55-4f4e-872a-e478e7d893e4-kube-api-access-969cx\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.092959 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a210a921-0e55-4f4e-872a-e478e7d893e4" (UID: "a210a921-0e55-4f4e-872a-e478e7d893e4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.094131 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a210a921-0e55-4f4e-872a-e478e7d893e4" (UID: "a210a921-0e55-4f4e-872a-e478e7d893e4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.187840 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.187887 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a210a921-0e55-4f4e-872a-e478e7d893e4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.196521 4828 scope.go:117] "RemoveContainer" containerID="a76456312a4fbad9960b9ad72b7850bf23f51ff805d0deb0728fafa85d00a6ed" Nov 29 07:29:37 crc kubenswrapper[4828]: E1129 07:29:37.196979 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a76456312a4fbad9960b9ad72b7850bf23f51ff805d0deb0728fafa85d00a6ed\": container with ID starting with a76456312a4fbad9960b9ad72b7850bf23f51ff805d0deb0728fafa85d00a6ed not found: ID does not exist" containerID="a76456312a4fbad9960b9ad72b7850bf23f51ff805d0deb0728fafa85d00a6ed" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.197071 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a76456312a4fbad9960b9ad72b7850bf23f51ff805d0deb0728fafa85d00a6ed"} err="failed to get container status \"a76456312a4fbad9960b9ad72b7850bf23f51ff805d0deb0728fafa85d00a6ed\": rpc error: code = NotFound desc = could not find container \"a76456312a4fbad9960b9ad72b7850bf23f51ff805d0deb0728fafa85d00a6ed\": container with ID starting with a76456312a4fbad9960b9ad72b7850bf23f51ff805d0deb0728fafa85d00a6ed not found: ID does not exist" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.197138 4828 scope.go:117] "RemoveContainer" 
containerID="9918a12627eb5d627a59af3458362b8f07d51be9cb48bc381a752b928fe5ec30" Nov 29 07:29:37 crc kubenswrapper[4828]: E1129 07:29:37.197497 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9918a12627eb5d627a59af3458362b8f07d51be9cb48bc381a752b928fe5ec30\": container with ID starting with 9918a12627eb5d627a59af3458362b8f07d51be9cb48bc381a752b928fe5ec30 not found: ID does not exist" containerID="9918a12627eb5d627a59af3458362b8f07d51be9cb48bc381a752b928fe5ec30" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.197534 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9918a12627eb5d627a59af3458362b8f07d51be9cb48bc381a752b928fe5ec30"} err="failed to get container status \"9918a12627eb5d627a59af3458362b8f07d51be9cb48bc381a752b928fe5ec30\": rpc error: code = NotFound desc = could not find container \"9918a12627eb5d627a59af3458362b8f07d51be9cb48bc381a752b928fe5ec30\": container with ID starting with 9918a12627eb5d627a59af3458362b8f07d51be9cb48bc381a752b928fe5ec30 not found: ID does not exist" Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.303601 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-f2cn7"] Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.314928 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-f2cn7"] Nov 29 07:29:37 crc kubenswrapper[4828]: I1129 07:29:37.428610 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a210a921-0e55-4f4e-872a-e478e7d893e4" path="/var/lib/kubelet/pods/a210a921-0e55-4f4e-872a-e478e7d893e4/volumes" Nov 29 07:29:41 crc kubenswrapper[4828]: I1129 07:29:41.421519 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:29:41 crc kubenswrapper[4828]: E1129 07:29:41.421763 4828 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.657582 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks"] Nov 29 07:29:48 crc kubenswrapper[4828]: E1129 07:29:48.658734 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a210a921-0e55-4f4e-872a-e478e7d893e4" containerName="dnsmasq-dns" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.658752 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a210a921-0e55-4f4e-872a-e478e7d893e4" containerName="dnsmasq-dns" Nov 29 07:29:48 crc kubenswrapper[4828]: E1129 07:29:48.658783 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" containerName="dnsmasq-dns" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.658790 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" containerName="dnsmasq-dns" Nov 29 07:29:48 crc kubenswrapper[4828]: E1129 07:29:48.658805 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" containerName="init" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.658813 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" containerName="init" Nov 29 07:29:48 crc kubenswrapper[4828]: E1129 07:29:48.658836 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a210a921-0e55-4f4e-872a-e478e7d893e4" containerName="init" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.658844 4828 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a210a921-0e55-4f4e-872a-e478e7d893e4" containerName="init" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.659050 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a210a921-0e55-4f4e-872a-e478e7d893e4" containerName="dnsmasq-dns" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.659073 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b8ff7c-a38d-4dc7-9c96-ef7e50fced7d" containerName="dnsmasq-dns" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.659887 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.662559 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.662587 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.662744 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.664767 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.672475 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks"] Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.726648 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks\" (UID: 
\"fe5d998b-174d-4669-b989-38c40f97ed4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.726786 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks\" (UID: \"fe5d998b-174d-4669-b989-38c40f97ed4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.726943 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8tss\" (UniqueName: \"kubernetes.io/projected/fe5d998b-174d-4669-b989-38c40f97ed4b-kube-api-access-r8tss\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks\" (UID: \"fe5d998b-174d-4669-b989-38c40f97ed4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.726975 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks\" (UID: \"fe5d998b-174d-4669-b989-38c40f97ed4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.828674 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8tss\" (UniqueName: \"kubernetes.io/projected/fe5d998b-174d-4669-b989-38c40f97ed4b-kube-api-access-r8tss\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks\" (UID: \"fe5d998b-174d-4669-b989-38c40f97ed4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.828733 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks\" (UID: \"fe5d998b-174d-4669-b989-38c40f97ed4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.828820 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks\" (UID: \"fe5d998b-174d-4669-b989-38c40f97ed4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.828863 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks\" (UID: \"fe5d998b-174d-4669-b989-38c40f97ed4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.843716 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks\" (UID: \"fe5d998b-174d-4669-b989-38c40f97ed4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.843946 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks\" (UID: 
\"fe5d998b-174d-4669-b989-38c40f97ed4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.843993 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks\" (UID: \"fe5d998b-174d-4669-b989-38c40f97ed4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.847920 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8tss\" (UniqueName: \"kubernetes.io/projected/fe5d998b-174d-4669-b989-38c40f97ed4b-kube-api-access-r8tss\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks\" (UID: \"fe5d998b-174d-4669-b989-38c40f97ed4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:29:48 crc kubenswrapper[4828]: I1129 07:29:48.988694 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:29:49 crc kubenswrapper[4828]: I1129 07:29:49.118311 4828 generic.go:334] "Generic (PLEG): container finished" podID="dcd1ae34-ece0-4632-8783-40db599d9ec4" containerID="bbd40906e6a3bd90d749d3b04b2f79b7175672451c3d52bca2b5a257ef1f9266" exitCode=0 Nov 29 07:29:49 crc kubenswrapper[4828]: I1129 07:29:49.118380 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dcd1ae34-ece0-4632-8783-40db599d9ec4","Type":"ContainerDied","Data":"bbd40906e6a3bd90d749d3b04b2f79b7175672451c3d52bca2b5a257ef1f9266"} Nov 29 07:29:49 crc kubenswrapper[4828]: W1129 07:29:49.741111 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe5d998b_174d_4669_b989_38c40f97ed4b.slice/crio-566c5eb6481975bb30dcea682b53d7ceb8f2056605dd1da5fae51b33aeca3e25 WatchSource:0}: Error finding container 566c5eb6481975bb30dcea682b53d7ceb8f2056605dd1da5fae51b33aeca3e25: Status 404 returned error can't find the container with id 566c5eb6481975bb30dcea682b53d7ceb8f2056605dd1da5fae51b33aeca3e25 Nov 29 07:29:49 crc kubenswrapper[4828]: I1129 07:29:49.743798 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:29:49 crc kubenswrapper[4828]: I1129 07:29:49.747006 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks"] Nov 29 07:29:50 crc kubenswrapper[4828]: I1129 07:29:50.131308 4828 generic.go:334] "Generic (PLEG): container finished" podID="2d69c925-6be3-4e39-8aa5-0e27cf8693cb" containerID="2027bc7fecfe3d896eb69e68d89490a7e62f92b64a36bdddc83270fd8666edcb" exitCode=0 Nov 29 07:29:50 crc kubenswrapper[4828]: I1129 07:29:50.131408 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"2d69c925-6be3-4e39-8aa5-0e27cf8693cb","Type":"ContainerDied","Data":"2027bc7fecfe3d896eb69e68d89490a7e62f92b64a36bdddc83270fd8666edcb"} Nov 29 07:29:50 crc kubenswrapper[4828]: I1129 07:29:50.134241 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dcd1ae34-ece0-4632-8783-40db599d9ec4","Type":"ContainerStarted","Data":"0f1628d61930999fe87392ff46104c04f1bacd034585397f6a77b7f5ae0b3961"} Nov 29 07:29:50 crc kubenswrapper[4828]: I1129 07:29:50.134526 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 29 07:29:50 crc kubenswrapper[4828]: I1129 07:29:50.136222 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" event={"ID":"fe5d998b-174d-4669-b989-38c40f97ed4b","Type":"ContainerStarted","Data":"566c5eb6481975bb30dcea682b53d7ceb8f2056605dd1da5fae51b33aeca3e25"} Nov 29 07:29:51 crc kubenswrapper[4828]: I1129 07:29:51.156922 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d69c925-6be3-4e39-8aa5-0e27cf8693cb","Type":"ContainerStarted","Data":"9b3f1c421a556e6df7f2780263226cd6ef7b5a46ca9a84fc4cc82ef75de9edef"} Nov 29 07:29:51 crc kubenswrapper[4828]: I1129 07:29:51.158329 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:29:51 crc kubenswrapper[4828]: I1129 07:29:51.191449 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.191427545 podStartE2EDuration="38.191427545s" podCreationTimestamp="2025-11-29 07:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:29:50.183878241 +0000 UTC m=+1729.805954299" watchObservedRunningTime="2025-11-29 07:29:51.191427545 +0000 UTC 
m=+1730.813503593" Nov 29 07:29:51 crc kubenswrapper[4828]: I1129 07:29:51.192353 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.192344849 podStartE2EDuration="36.192344849s" podCreationTimestamp="2025-11-29 07:29:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:29:51.187736821 +0000 UTC m=+1730.809812879" watchObservedRunningTime="2025-11-29 07:29:51.192344849 +0000 UTC m=+1730.814420917" Nov 29 07:29:54 crc kubenswrapper[4828]: I1129 07:29:54.413061 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:29:54 crc kubenswrapper[4828]: E1129 07:29:54.414014 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.153025 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk"] Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.158127 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.162345 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.163707 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.164009 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk"] Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.249153 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b708966-6ad1-4b32-abe6-097320e1b348-config-volume\") pod \"collect-profiles-29406690-chjbk\" (UID: \"8b708966-6ad1-4b32-abe6-097320e1b348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.249228 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b708966-6ad1-4b32-abe6-097320e1b348-secret-volume\") pod \"collect-profiles-29406690-chjbk\" (UID: \"8b708966-6ad1-4b32-abe6-097320e1b348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.249253 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbd84\" (UniqueName: \"kubernetes.io/projected/8b708966-6ad1-4b32-abe6-097320e1b348-kube-api-access-fbd84\") pod \"collect-profiles-29406690-chjbk\" (UID: \"8b708966-6ad1-4b32-abe6-097320e1b348\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.351508 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b708966-6ad1-4b32-abe6-097320e1b348-secret-volume\") pod \"collect-profiles-29406690-chjbk\" (UID: \"8b708966-6ad1-4b32-abe6-097320e1b348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.351573 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbd84\" (UniqueName: \"kubernetes.io/projected/8b708966-6ad1-4b32-abe6-097320e1b348-kube-api-access-fbd84\") pod \"collect-profiles-29406690-chjbk\" (UID: \"8b708966-6ad1-4b32-abe6-097320e1b348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.351784 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b708966-6ad1-4b32-abe6-097320e1b348-config-volume\") pod \"collect-profiles-29406690-chjbk\" (UID: \"8b708966-6ad1-4b32-abe6-097320e1b348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.352979 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b708966-6ad1-4b32-abe6-097320e1b348-config-volume\") pod \"collect-profiles-29406690-chjbk\" (UID: \"8b708966-6ad1-4b32-abe6-097320e1b348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.367208 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8b708966-6ad1-4b32-abe6-097320e1b348-secret-volume\") pod \"collect-profiles-29406690-chjbk\" (UID: \"8b708966-6ad1-4b32-abe6-097320e1b348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.369739 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbd84\" (UniqueName: \"kubernetes.io/projected/8b708966-6ad1-4b32-abe6-097320e1b348-kube-api-access-fbd84\") pod \"collect-profiles-29406690-chjbk\" (UID: \"8b708966-6ad1-4b32-abe6-097320e1b348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" Nov 29 07:30:00 crc kubenswrapper[4828]: I1129 07:30:00.488406 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" Nov 29 07:30:03 crc kubenswrapper[4828]: I1129 07:30:03.801060 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="dcd1ae34-ece0-4632-8783-40db599d9ec4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.218:5671: connect: connection refused" Nov 29 07:30:05 crc kubenswrapper[4828]: I1129 07:30:05.411880 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:30:05 crc kubenswrapper[4828]: E1129 07:30:05.412391 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:30:06 crc kubenswrapper[4828]: I1129 07:30:06.032885 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk"] Nov 29 07:30:06 crc kubenswrapper[4828]: I1129 07:30:06.191888 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2d69c925-6be3-4e39-8aa5-0e27cf8693cb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.220:5671: connect: connection refused" Nov 29 07:30:06 crc kubenswrapper[4828]: I1129 07:30:06.327860 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" event={"ID":"8b708966-6ad1-4b32-abe6-097320e1b348","Type":"ContainerStarted","Data":"f35c467bafbdd74a20d59406f16249ed3c55c05c5a2506b78241d70e6b408aea"} Nov 29 07:30:06 crc kubenswrapper[4828]: E1129 07:30:06.668929 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Nov 29 07:30:06 crc kubenswrapper[4828]: E1129 07:30:06.669233 4828 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 29 07:30:06 crc kubenswrapper[4828]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Nov 29 07:30:06 crc kubenswrapper[4828]: - hosts: all Nov 29 07:30:06 crc kubenswrapper[4828]: strategy: linear Nov 29 07:30:06 crc kubenswrapper[4828]: tasks: Nov 29 07:30:06 crc kubenswrapper[4828]: - name: Enable podified-repos Nov 29 07:30:06 crc kubenswrapper[4828]: become: true Nov 29 07:30:06 crc kubenswrapper[4828]: ansible.builtin.shell: | Nov 29 07:30:06 crc kubenswrapper[4828]: set -euxo pipefail Nov 29 07:30:06 crc 
kubenswrapper[4828]: pushd /var/tmp Nov 29 07:30:06 crc kubenswrapper[4828]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Nov 29 07:30:06 crc kubenswrapper[4828]: pushd repo-setup-main Nov 29 07:30:06 crc kubenswrapper[4828]: python3 -m venv ./venv Nov 29 07:30:06 crc kubenswrapper[4828]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Nov 29 07:30:06 crc kubenswrapper[4828]: ./venv/bin/repo-setup current-podified -b antelope Nov 29 07:30:06 crc kubenswrapper[4828]: popd Nov 29 07:30:06 crc kubenswrapper[4828]: rm -rf repo-setup-main Nov 29 07:30:06 crc kubenswrapper[4828]: Nov 29 07:30:06 crc kubenswrapper[4828]: Nov 29 07:30:06 crc kubenswrapper[4828]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Nov 29 07:30:06 crc kubenswrapper[4828]: edpm_override_hosts: openstack-edpm-ipam Nov 29 07:30:06 crc kubenswrapper[4828]: edpm_service_type: repo-setup Nov 29 07:30:06 crc kubenswrapper[4828]: Nov 29 07:30:06 crc kubenswrapper[4828]: Nov 29 07:30:06 crc kubenswrapper[4828]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/runner/env/ssh_key,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8tss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks_openstack(fe5d998b-174d-4669-b989-38c40f97ed4b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Nov 29 07:30:06 crc kubenswrapper[4828]: > logger="UnhandledError" Nov 29 07:30:06 crc kubenswrapper[4828]: E1129 07:30:06.670422 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" podUID="fe5d998b-174d-4669-b989-38c40f97ed4b" Nov 29 07:30:07 crc kubenswrapper[4828]: I1129 07:30:07.338789 4828 generic.go:334] "Generic (PLEG): container finished" podID="8b708966-6ad1-4b32-abe6-097320e1b348" containerID="0f5059304e2a77966ade9ab64f5326c1c9dec7e20eb0b26278c3d6f928b56de4" exitCode=0 Nov 29 07:30:07 crc kubenswrapper[4828]: I1129 07:30:07.338863 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" event={"ID":"8b708966-6ad1-4b32-abe6-097320e1b348","Type":"ContainerDied","Data":"0f5059304e2a77966ade9ab64f5326c1c9dec7e20eb0b26278c3d6f928b56de4"} Nov 29 07:30:07 crc kubenswrapper[4828]: E1129 07:30:07.340829 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" podUID="fe5d998b-174d-4669-b989-38c40f97ed4b" Nov 29 07:30:08 crc kubenswrapper[4828]: I1129 07:30:08.736393 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" Nov 29 07:30:08 crc kubenswrapper[4828]: I1129 07:30:08.812389 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b708966-6ad1-4b32-abe6-097320e1b348-secret-volume\") pod \"8b708966-6ad1-4b32-abe6-097320e1b348\" (UID: \"8b708966-6ad1-4b32-abe6-097320e1b348\") " Nov 29 07:30:08 crc kubenswrapper[4828]: I1129 07:30:08.812824 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b708966-6ad1-4b32-abe6-097320e1b348-config-volume\") pod \"8b708966-6ad1-4b32-abe6-097320e1b348\" (UID: \"8b708966-6ad1-4b32-abe6-097320e1b348\") " Nov 29 07:30:08 crc kubenswrapper[4828]: I1129 07:30:08.812984 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbd84\" (UniqueName: \"kubernetes.io/projected/8b708966-6ad1-4b32-abe6-097320e1b348-kube-api-access-fbd84\") pod \"8b708966-6ad1-4b32-abe6-097320e1b348\" (UID: \"8b708966-6ad1-4b32-abe6-097320e1b348\") " Nov 29 07:30:08 crc kubenswrapper[4828]: I1129 07:30:08.814799 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b708966-6ad1-4b32-abe6-097320e1b348-config-volume" (OuterVolumeSpecName: "config-volume") pod "8b708966-6ad1-4b32-abe6-097320e1b348" (UID: "8b708966-6ad1-4b32-abe6-097320e1b348"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:08 crc kubenswrapper[4828]: I1129 07:30:08.821798 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b708966-6ad1-4b32-abe6-097320e1b348-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8b708966-6ad1-4b32-abe6-097320e1b348" (UID: "8b708966-6ad1-4b32-abe6-097320e1b348"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:30:08 crc kubenswrapper[4828]: I1129 07:30:08.822390 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b708966-6ad1-4b32-abe6-097320e1b348-kube-api-access-fbd84" (OuterVolumeSpecName: "kube-api-access-fbd84") pod "8b708966-6ad1-4b32-abe6-097320e1b348" (UID: "8b708966-6ad1-4b32-abe6-097320e1b348"). InnerVolumeSpecName "kube-api-access-fbd84". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:08 crc kubenswrapper[4828]: I1129 07:30:08.916058 4828 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b708966-6ad1-4b32-abe6-097320e1b348-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:08 crc kubenswrapper[4828]: I1129 07:30:08.916103 4828 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b708966-6ad1-4b32-abe6-097320e1b348-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:08 crc kubenswrapper[4828]: I1129 07:30:08.916115 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbd84\" (UniqueName: \"kubernetes.io/projected/8b708966-6ad1-4b32-abe6-097320e1b348-kube-api-access-fbd84\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:09 crc kubenswrapper[4828]: I1129 07:30:09.361238 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" event={"ID":"8b708966-6ad1-4b32-abe6-097320e1b348","Type":"ContainerDied","Data":"f35c467bafbdd74a20d59406f16249ed3c55c05c5a2506b78241d70e6b408aea"} Nov 29 07:30:09 crc kubenswrapper[4828]: I1129 07:30:09.361301 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk" Nov 29 07:30:09 crc kubenswrapper[4828]: I1129 07:30:09.361313 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f35c467bafbdd74a20d59406f16249ed3c55c05c5a2506b78241d70e6b408aea" Nov 29 07:30:10 crc kubenswrapper[4828]: I1129 07:30:10.239346 4828 scope.go:117] "RemoveContainer" containerID="7badf57f351e8ebdc8d8a1fcbfbcc6605bc40a34d847b791f38a37d9316c4595" Nov 29 07:30:13 crc kubenswrapper[4828]: I1129 07:30:13.801512 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 29 07:30:16 crc kubenswrapper[4828]: I1129 07:30:16.191497 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:17 crc kubenswrapper[4828]: I1129 07:30:17.415525 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:30:17 crc kubenswrapper[4828]: E1129 07:30:17.416106 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:30:21 crc kubenswrapper[4828]: I1129 07:30:21.029149 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:30:22 crc kubenswrapper[4828]: I1129 07:30:22.514185 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" 
event={"ID":"fe5d998b-174d-4669-b989-38c40f97ed4b","Type":"ContainerStarted","Data":"0f5eeace4405836ca6b005c2a357d83c5db2d098811334d785bf5bb7a7cf45ba"} Nov 29 07:30:22 crc kubenswrapper[4828]: I1129 07:30:22.531937 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" podStartSLOduration=3.249131383 podStartE2EDuration="34.531898955s" podCreationTimestamp="2025-11-29 07:29:48 +0000 UTC" firstStartedPulling="2025-11-29 07:29:49.743545314 +0000 UTC m=+1729.365621372" lastFinishedPulling="2025-11-29 07:30:21.026312886 +0000 UTC m=+1760.648388944" observedRunningTime="2025-11-29 07:30:22.530370265 +0000 UTC m=+1762.152446343" watchObservedRunningTime="2025-11-29 07:30:22.531898955 +0000 UTC m=+1762.153975013" Nov 29 07:30:30 crc kubenswrapper[4828]: I1129 07:30:30.412008 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:30:30 crc kubenswrapper[4828]: E1129 07:30:30.413842 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:30:39 crc kubenswrapper[4828]: I1129 07:30:39.704777 4828 generic.go:334] "Generic (PLEG): container finished" podID="fe5d998b-174d-4669-b989-38c40f97ed4b" containerID="0f5eeace4405836ca6b005c2a357d83c5db2d098811334d785bf5bb7a7cf45ba" exitCode=0 Nov 29 07:30:39 crc kubenswrapper[4828]: I1129 07:30:39.704912 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" 
event={"ID":"fe5d998b-174d-4669-b989-38c40f97ed4b","Type":"ContainerDied","Data":"0f5eeace4405836ca6b005c2a357d83c5db2d098811334d785bf5bb7a7cf45ba"} Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.200206 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.312212 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-repo-setup-combined-ca-bundle\") pod \"fe5d998b-174d-4669-b989-38c40f97ed4b\" (UID: \"fe5d998b-174d-4669-b989-38c40f97ed4b\") " Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.312430 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-inventory\") pod \"fe5d998b-174d-4669-b989-38c40f97ed4b\" (UID: \"fe5d998b-174d-4669-b989-38c40f97ed4b\") " Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.313170 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8tss\" (UniqueName: \"kubernetes.io/projected/fe5d998b-174d-4669-b989-38c40f97ed4b-kube-api-access-r8tss\") pod \"fe5d998b-174d-4669-b989-38c40f97ed4b\" (UID: \"fe5d998b-174d-4669-b989-38c40f97ed4b\") " Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.313204 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-ssh-key\") pod \"fe5d998b-174d-4669-b989-38c40f97ed4b\" (UID: \"fe5d998b-174d-4669-b989-38c40f97ed4b\") " Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.319289 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fe5d998b-174d-4669-b989-38c40f97ed4b" (UID: "fe5d998b-174d-4669-b989-38c40f97ed4b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.323626 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe5d998b-174d-4669-b989-38c40f97ed4b-kube-api-access-r8tss" (OuterVolumeSpecName: "kube-api-access-r8tss") pod "fe5d998b-174d-4669-b989-38c40f97ed4b" (UID: "fe5d998b-174d-4669-b989-38c40f97ed4b"). InnerVolumeSpecName "kube-api-access-r8tss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.346264 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-inventory" (OuterVolumeSpecName: "inventory") pod "fe5d998b-174d-4669-b989-38c40f97ed4b" (UID: "fe5d998b-174d-4669-b989-38c40f97ed4b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.350506 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fe5d998b-174d-4669-b989-38c40f97ed4b" (UID: "fe5d998b-174d-4669-b989-38c40f97ed4b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.415426 4828 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.415475 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.415552 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8tss\" (UniqueName: \"kubernetes.io/projected/fe5d998b-174d-4669-b989-38c40f97ed4b-kube-api-access-r8tss\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.415565 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe5d998b-174d-4669-b989-38c40f97ed4b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.724333 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" event={"ID":"fe5d998b-174d-4669-b989-38c40f97ed4b","Type":"ContainerDied","Data":"566c5eb6481975bb30dcea682b53d7ceb8f2056605dd1da5fae51b33aeca3e25"} Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.724386 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.724396 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="566c5eb6481975bb30dcea682b53d7ceb8f2056605dd1da5fae51b33aeca3e25" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.912740 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw"] Nov 29 07:30:41 crc kubenswrapper[4828]: E1129 07:30:41.913894 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b708966-6ad1-4b32-abe6-097320e1b348" containerName="collect-profiles" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.913934 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b708966-6ad1-4b32-abe6-097320e1b348" containerName="collect-profiles" Nov 29 07:30:41 crc kubenswrapper[4828]: E1129 07:30:41.913971 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5d998b-174d-4669-b989-38c40f97ed4b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.913983 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5d998b-174d-4669-b989-38c40f97ed4b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.914259 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b708966-6ad1-4b32-abe6-097320e1b348" containerName="collect-profiles" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.914301 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5d998b-174d-4669-b989-38c40f97ed4b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.915174 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.917867 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.918108 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.919000 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.920102 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:30:41 crc kubenswrapper[4828]: I1129 07:30:41.934080 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw"] Nov 29 07:30:42 crc kubenswrapper[4828]: I1129 07:30:42.024844 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rhzsw\" (UID: \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" Nov 29 07:30:42 crc kubenswrapper[4828]: I1129 07:30:42.024959 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rhzsw\" (UID: \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" Nov 29 07:30:42 crc kubenswrapper[4828]: I1129 07:30:42.025033 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv6fg\" (UniqueName: \"kubernetes.io/projected/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-kube-api-access-hv6fg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rhzsw\" (UID: \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" Nov 29 07:30:42 crc kubenswrapper[4828]: I1129 07:30:42.126417 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rhzsw\" (UID: \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" Nov 29 07:30:42 crc kubenswrapper[4828]: I1129 07:30:42.126725 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rhzsw\" (UID: \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" Nov 29 07:30:42 crc kubenswrapper[4828]: I1129 07:30:42.127567 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv6fg\" (UniqueName: \"kubernetes.io/projected/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-kube-api-access-hv6fg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rhzsw\" (UID: \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" Nov 29 07:30:42 crc kubenswrapper[4828]: I1129 07:30:42.130721 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rhzsw\" (UID: \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" Nov 29 07:30:42 crc kubenswrapper[4828]: I1129 07:30:42.133318 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rhzsw\" (UID: \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" Nov 29 07:30:42 crc kubenswrapper[4828]: I1129 07:30:42.146409 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv6fg\" (UniqueName: \"kubernetes.io/projected/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-kube-api-access-hv6fg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rhzsw\" (UID: \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" Nov 29 07:30:42 crc kubenswrapper[4828]: I1129 07:30:42.234519 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" Nov 29 07:30:42 crc kubenswrapper[4828]: W1129 07:30:42.788890 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda091a008_dd3d_4c3f_be97_ac7b35c7c52a.slice/crio-3fb56f18b4e341e730fcd15418670c3ef272dceeab382487b3c52866e6abcea3 WatchSource:0}: Error finding container 3fb56f18b4e341e730fcd15418670c3ef272dceeab382487b3c52866e6abcea3: Status 404 returned error can't find the container with id 3fb56f18b4e341e730fcd15418670c3ef272dceeab382487b3c52866e6abcea3 Nov 29 07:30:42 crc kubenswrapper[4828]: I1129 07:30:42.793976 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw"] Nov 29 07:30:43 crc kubenswrapper[4828]: I1129 07:30:43.412598 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:30:43 crc kubenswrapper[4828]: E1129 07:30:43.413001 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:30:43 crc kubenswrapper[4828]: I1129 07:30:43.744564 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" event={"ID":"a091a008-dd3d-4c3f-be97-ac7b35c7c52a","Type":"ContainerStarted","Data":"3fb56f18b4e341e730fcd15418670c3ef272dceeab382487b3c52866e6abcea3"} Nov 29 07:30:44 crc kubenswrapper[4828]: I1129 07:30:44.762136 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" event={"ID":"a091a008-dd3d-4c3f-be97-ac7b35c7c52a","Type":"ContainerStarted","Data":"36625ce465dc082857b951bbccba5b7e31e6b263f906187e37387ffdfdcdec09"} Nov 29 07:30:44 crc kubenswrapper[4828]: I1129 07:30:44.795828 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" podStartSLOduration=2.242767126 podStartE2EDuration="3.795771639s" podCreationTimestamp="2025-11-29 07:30:41 +0000 UTC" firstStartedPulling="2025-11-29 07:30:42.794127336 +0000 UTC m=+1782.416203394" lastFinishedPulling="2025-11-29 07:30:44.347131849 +0000 UTC m=+1783.969207907" observedRunningTime="2025-11-29 07:30:44.785016394 +0000 UTC m=+1784.407092452" watchObservedRunningTime="2025-11-29 07:30:44.795771639 +0000 UTC m=+1784.417847697" Nov 29 07:30:47 crc kubenswrapper[4828]: I1129 07:30:47.790327 4828 generic.go:334] "Generic (PLEG): container finished" podID="a091a008-dd3d-4c3f-be97-ac7b35c7c52a" containerID="36625ce465dc082857b951bbccba5b7e31e6b263f906187e37387ffdfdcdec09" exitCode=0 Nov 29 07:30:47 crc kubenswrapper[4828]: I1129 07:30:47.790437 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" event={"ID":"a091a008-dd3d-4c3f-be97-ac7b35c7c52a","Type":"ContainerDied","Data":"36625ce465dc082857b951bbccba5b7e31e6b263f906187e37387ffdfdcdec09"} Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.243322 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.402539 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-ssh-key\") pod \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\" (UID: \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\") " Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.403162 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv6fg\" (UniqueName: \"kubernetes.io/projected/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-kube-api-access-hv6fg\") pod \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\" (UID: \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\") " Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.403229 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-inventory\") pod \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\" (UID: \"a091a008-dd3d-4c3f-be97-ac7b35c7c52a\") " Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.410468 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-kube-api-access-hv6fg" (OuterVolumeSpecName: "kube-api-access-hv6fg") pod "a091a008-dd3d-4c3f-be97-ac7b35c7c52a" (UID: "a091a008-dd3d-4c3f-be97-ac7b35c7c52a"). InnerVolumeSpecName "kube-api-access-hv6fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.432383 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a091a008-dd3d-4c3f-be97-ac7b35c7c52a" (UID: "a091a008-dd3d-4c3f-be97-ac7b35c7c52a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.437291 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-inventory" (OuterVolumeSpecName: "inventory") pod "a091a008-dd3d-4c3f-be97-ac7b35c7c52a" (UID: "a091a008-dd3d-4c3f-be97-ac7b35c7c52a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.505514 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv6fg\" (UniqueName: \"kubernetes.io/projected/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-kube-api-access-hv6fg\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.505559 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.505573 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a091a008-dd3d-4c3f-be97-ac7b35c7c52a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.817452 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" event={"ID":"a091a008-dd3d-4c3f-be97-ac7b35c7c52a","Type":"ContainerDied","Data":"3fb56f18b4e341e730fcd15418670c3ef272dceeab382487b3c52866e6abcea3"} Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.817511 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb56f18b4e341e730fcd15418670c3ef272dceeab382487b3c52866e6abcea3" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.817527 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rhzsw" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.892390 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9"] Nov 29 07:30:49 crc kubenswrapper[4828]: E1129 07:30:49.892887 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a091a008-dd3d-4c3f-be97-ac7b35c7c52a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.892909 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a091a008-dd3d-4c3f-be97-ac7b35c7c52a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.893097 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a091a008-dd3d-4c3f-be97-ac7b35c7c52a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.893736 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.896511 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.897286 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.897660 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.900564 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:30:49 crc kubenswrapper[4828]: I1129 07:30:49.906196 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9"] Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 07:30:50.017164 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjvkv\" (UniqueName: \"kubernetes.io/projected/8525375e-b298-4e44-ae0b-9f26a3b1001a-kube-api-access-fjvkv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9" Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 07:30:50.017335 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9" Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 
07:30:50.017462 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9" Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 07:30:50.017536 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9" Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 07:30:50.119080 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjvkv\" (UniqueName: \"kubernetes.io/projected/8525375e-b298-4e44-ae0b-9f26a3b1001a-kube-api-access-fjvkv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9" Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 07:30:50.119233 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9" Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 07:30:50.119291 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9"
Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 07:30:50.119315 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9"
Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 07:30:50.123782 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9"
Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 07:30:50.124099 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9"
Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 07:30:50.128565 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9"
Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 07:30:50.139863 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjvkv\" (UniqueName: \"kubernetes.io/projected/8525375e-b298-4e44-ae0b-9f26a3b1001a-kube-api-access-fjvkv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9"
Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 07:30:50.210789 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9"
Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 07:30:50.815416 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9"]
Nov 29 07:30:50 crc kubenswrapper[4828]: I1129 07:30:50.828222 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9" event={"ID":"8525375e-b298-4e44-ae0b-9f26a3b1001a","Type":"ContainerStarted","Data":"a1ddd2f5b8f91f4e1f22ae1fbc35f8d697b77ba320d788836ad2150604a9e7e2"}
Nov 29 07:30:51 crc kubenswrapper[4828]: I1129 07:30:51.849000 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9" event={"ID":"8525375e-b298-4e44-ae0b-9f26a3b1001a","Type":"ContainerStarted","Data":"569531fafb3e96af00912d36981ef5581b32b8e077886624f0e8e1b65f103e0a"}
Nov 29 07:30:51 crc kubenswrapper[4828]: I1129 07:30:51.870452 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9" podStartSLOduration=2.32090746 podStartE2EDuration="2.870433524s" podCreationTimestamp="2025-11-29 07:30:49 +0000 UTC" firstStartedPulling="2025-11-29 07:30:50.810689183 +0000 UTC m=+1790.432765231" lastFinishedPulling="2025-11-29 07:30:51.360215227 +0000 UTC m=+1790.982291295" observedRunningTime="2025-11-29 07:30:51.86326239 +0000 UTC m=+1791.485338468" watchObservedRunningTime="2025-11-29 07:30:51.870433524 +0000 UTC m=+1791.492509582"
Nov 29 07:30:55 crc kubenswrapper[4828]: I1129 07:30:55.412624 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd"
Nov 29 07:30:55 crc kubenswrapper[4828]: E1129 07:30:55.413426 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:31:08 crc kubenswrapper[4828]: I1129 07:31:08.412381 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd"
Nov 29 07:31:08 crc kubenswrapper[4828]: E1129 07:31:08.413211 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:31:22 crc kubenswrapper[4828]: I1129 07:31:22.411746 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd"
Nov 29 07:31:22 crc kubenswrapper[4828]: E1129 07:31:22.412562 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:31:37 crc kubenswrapper[4828]: I1129 07:31:37.412119 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd"
Nov 29 07:31:37 crc kubenswrapper[4828]: E1129 07:31:37.412873 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:31:49 crc kubenswrapper[4828]: I1129 07:31:49.411320 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd"
Nov 29 07:31:49 crc kubenswrapper[4828]: E1129 07:31:49.412078 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:32:00 crc kubenswrapper[4828]: I1129 07:32:00.412363 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd"
Nov 29 07:32:00 crc kubenswrapper[4828]: E1129 07:32:00.413235 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:32:10 crc kubenswrapper[4828]: I1129 07:32:10.466719 4828 scope.go:117] "RemoveContainer" containerID="453218183e4bda76d8abf3244f08ca3767a43dd3388d391f8c33e067ec864666"
Nov 29 07:32:10 crc kubenswrapper[4828]: I1129 07:32:10.493444 4828 scope.go:117] "RemoveContainer" containerID="b884a3b25b7e05c18834638576c07c664d8f0cf7eba93a15a19c1d340f8fbe87"
Nov 29 07:32:10 crc kubenswrapper[4828]: I1129 07:32:10.516553 4828 scope.go:117] "RemoveContainer" containerID="56e73ae70b3d58618da38bfd87d0d1f57637b929683c888f4da705a9e5d18f42"
Nov 29 07:32:10 crc kubenswrapper[4828]: I1129 07:32:10.547422 4828 scope.go:117] "RemoveContainer" containerID="90a72fcd49d483e28014d981981a6d0f2d26d67b6f5b5957e4b234f8f6a88506"
Nov 29 07:32:12 crc kubenswrapper[4828]: I1129 07:32:12.412256 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd"
Nov 29 07:32:12 crc kubenswrapper[4828]: E1129 07:32:12.412892 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:32:18 crc kubenswrapper[4828]: I1129 07:32:18.054495 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-dq84z"]
Nov 29 07:32:18 crc kubenswrapper[4828]: I1129 07:32:18.061015 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2f95-account-create-update-b9r9q"]
Nov 29 07:32:18 crc kubenswrapper[4828]: I1129 07:32:18.070994 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-4bmqd"]
Nov 29 07:32:18 crc kubenswrapper[4828]: I1129 07:32:18.080440 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2f95-account-create-update-b9r9q"]
Nov 29 07:32:18 crc kubenswrapper[4828]: I1129 07:32:18.089420 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-dq84z"]
Nov 29 07:32:18 crc kubenswrapper[4828]: I1129 07:32:18.097576 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-4bmqd"]
Nov 29 07:32:18 crc kubenswrapper[4828]: I1129 07:32:18.106448 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-c66nk"]
Nov 29 07:32:18 crc kubenswrapper[4828]: I1129 07:32:18.113996 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-c66nk"]
Nov 29 07:32:19 crc kubenswrapper[4828]: I1129 07:32:19.026216 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-58d6-account-create-update-hg569"]
Nov 29 07:32:19 crc kubenswrapper[4828]: I1129 07:32:19.039881 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2d11-account-create-update-7mbnk"]
Nov 29 07:32:19 crc kubenswrapper[4828]: I1129 07:32:19.048932 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-58d6-account-create-update-hg569"]
Nov 29 07:32:19 crc kubenswrapper[4828]: I1129 07:32:19.057837 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2d11-account-create-update-7mbnk"]
Nov 29 07:32:19 crc kubenswrapper[4828]: I1129 07:32:19.424468 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8" path="/var/lib/kubelet/pods/2a9cfc4a-a81b-42f3-8ee1-6a97fd9ab4d8/volumes"
Nov 29 07:32:19 crc kubenswrapper[4828]: I1129 07:32:19.425366 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55618ab7-858f-49e2-b3ff-259cf7eb69ed" path="/var/lib/kubelet/pods/55618ab7-858f-49e2-b3ff-259cf7eb69ed/volumes"
Nov 29 07:32:19 crc kubenswrapper[4828]: I1129 07:32:19.426174 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d92091c-a581-48d8-8e33-8f54e57a03a3" path="/var/lib/kubelet/pods/5d92091c-a581-48d8-8e33-8f54e57a03a3/volumes"
Nov 29 07:32:19 crc kubenswrapper[4828]: I1129 07:32:19.426928 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f93d5f3-01f0-4035-8d53-22594f87c388" path="/var/lib/kubelet/pods/5f93d5f3-01f0-4035-8d53-22594f87c388/volumes"
Nov 29 07:32:19 crc kubenswrapper[4828]: I1129 07:32:19.428595 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946d34b3-2986-4833-bd08-b898ddd4fcd7" path="/var/lib/kubelet/pods/946d34b3-2986-4833-bd08-b898ddd4fcd7/volumes"
Nov 29 07:32:19 crc kubenswrapper[4828]: I1129 07:32:19.429532 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bca69608-e449-4f32-b236-6a59faa37c3f" path="/var/lib/kubelet/pods/bca69608-e449-4f32-b236-6a59faa37c3f/volumes"
Nov 29 07:32:24 crc kubenswrapper[4828]: I1129 07:32:24.412449 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd"
Nov 29 07:32:24 crc kubenswrapper[4828]: E1129 07:32:24.413426 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:32:37 crc kubenswrapper[4828]: I1129 07:32:37.412071 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd"
Nov 29 07:32:37 crc kubenswrapper[4828]: E1129 07:32:37.412968 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:32:41 crc kubenswrapper[4828]: I1129 07:32:41.045017 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-nbs6p"]
Nov 29 07:32:41 crc kubenswrapper[4828]: I1129 07:32:41.057216 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-gn86f"]
Nov 29 07:32:41 crc kubenswrapper[4828]: I1129 07:32:41.066590 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-nbs6p"]
Nov 29 07:32:41 crc kubenswrapper[4828]: I1129 07:32:41.076771 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-gn86f"]
Nov 29 07:32:41 crc kubenswrapper[4828]: I1129 07:32:41.429784 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="178b5736-03a6-439e-b1b8-b123b85d1876" path="/var/lib/kubelet/pods/178b5736-03a6-439e-b1b8-b123b85d1876/volumes"
Nov 29 07:32:41 crc kubenswrapper[4828]: I1129 07:32:41.430944 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cdeb5e1-cc93-4735-9968-0643cf836b22" path="/var/lib/kubelet/pods/2cdeb5e1-cc93-4735-9968-0643cf836b22/volumes"
Nov 29 07:32:42 crc kubenswrapper[4828]: I1129 07:32:42.055183 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6816-account-create-update-m6qkv"]
Nov 29 07:32:42 crc kubenswrapper[4828]: I1129 07:32:42.093580 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zjlgk"]
Nov 29 07:32:42 crc kubenswrapper[4828]: I1129 07:32:42.112436 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-bd2rb"]
Nov 29 07:32:42 crc kubenswrapper[4828]: I1129 07:32:42.117359 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6816-account-create-update-m6qkv"]
Nov 29 07:32:42 crc kubenswrapper[4828]: I1129 07:32:42.128691 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-bd2rb"]
Nov 29 07:32:42 crc kubenswrapper[4828]: I1129 07:32:42.141087 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-216d-account-create-update-znwgr"]
Nov 29 07:32:42 crc kubenswrapper[4828]: I1129 07:32:42.150299 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-1206-account-create-update-gbdkb"]
Nov 29 07:32:42 crc kubenswrapper[4828]: I1129 07:32:42.158368 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zjlgk"]
Nov 29 07:32:42 crc kubenswrapper[4828]: I1129 07:32:42.165885 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-1206-account-create-update-gbdkb"]
Nov 29 07:32:42 crc kubenswrapper[4828]: I1129 07:32:42.173402 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2566-account-create-update-m95nq"]
Nov 29 07:32:42 crc kubenswrapper[4828]: I1129 07:32:42.180868 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-216d-account-create-update-znwgr"]
Nov 29 07:32:42 crc kubenswrapper[4828]: I1129 07:32:42.187751 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2566-account-create-update-m95nq"]
Nov 29 07:32:43 crc kubenswrapper[4828]: I1129 07:32:43.422995 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e2b4f0-bbde-48b4-9c44-12e59b1548b9" path="/var/lib/kubelet/pods/26e2b4f0-bbde-48b4-9c44-12e59b1548b9/volumes"
Nov 29 07:32:43 crc kubenswrapper[4828]: I1129 07:32:43.423640 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8" path="/var/lib/kubelet/pods/4c5adfc8-a9ec-4d0b-9c1c-283f77fedfb8/volumes"
Nov 29 07:32:43 crc kubenswrapper[4828]: I1129 07:32:43.424170 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="927d823f-6545-47a6-b9d6-3437c4f3d493" path="/var/lib/kubelet/pods/927d823f-6545-47a6-b9d6-3437c4f3d493/volumes"
Nov 29 07:32:43 crc kubenswrapper[4828]: I1129 07:32:43.424735 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b394d40e-1759-4220-a59f-9d5d90957634" path="/var/lib/kubelet/pods/b394d40e-1759-4220-a59f-9d5d90957634/volumes"
Nov 29 07:32:43 crc kubenswrapper[4828]: I1129 07:32:43.425931 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee34a7f9-16ab-4a44-855c-ed865e5d0331" path="/var/lib/kubelet/pods/ee34a7f9-16ab-4a44-855c-ed865e5d0331/volumes"
Nov 29 07:32:43 crc kubenswrapper[4828]: I1129 07:32:43.426550 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc59d5d0-a534-49b4-977f-c0c787929ad7" path="/var/lib/kubelet/pods/fc59d5d0-a534-49b4-977f-c0c787929ad7/volumes"
Nov 29 07:32:50 crc kubenswrapper[4828]: I1129 07:32:50.412364 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd"
Nov 29 07:32:52 crc kubenswrapper[4828]: I1129 07:32:52.197697 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"3e4b03aa844a4a6319ecb0b1d8c8adf54bb46f868c3fcd41d7078405776727be"}
Nov 29 07:33:10 crc kubenswrapper[4828]: I1129 07:33:10.057283 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-9lfbf"]
Nov 29 07:33:10 crc kubenswrapper[4828]: I1129 07:33:10.065718 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-9lfbf"]
Nov 29 07:33:10 crc kubenswrapper[4828]: I1129 07:33:10.598839 4828 scope.go:117] "RemoveContainer" containerID="f0dfdc647e462852c4bb506b4b2a2b6dd3764f0a0b45c8e722c325f30782b689"
Nov 29 07:33:10 crc kubenswrapper[4828]: I1129 07:33:10.621163 4828 scope.go:117] "RemoveContainer" containerID="f264c7ec47625ca59ddd10ab8843e20108222d4a13a9a5ff6c6ee913ffe21e6a"
Nov 29 07:33:10 crc kubenswrapper[4828]: I1129 07:33:10.650407 4828 scope.go:117] "RemoveContainer" containerID="17e5bfcfc9d65ef62cb3643b7962fe86bf515683d93a08d2bff23b99360bd7f2"
Nov 29 07:33:10 crc kubenswrapper[4828]: I1129 07:33:10.803552 4828 scope.go:117] "RemoveContainer" containerID="d44685b1055ff18bc37ac5c248c7da04f81535f9a2e58364279f9867de055285"
Nov 29 07:33:10 crc kubenswrapper[4828]: I1129 07:33:10.824909 4828 scope.go:117] "RemoveContainer" containerID="fde4972dea806434f8795fd8ee837363678bb829dd54701ae495292f27e14ca9"
Nov 29 07:33:10 crc kubenswrapper[4828]: I1129 07:33:10.883469 4828 scope.go:117] "RemoveContainer" containerID="b8081ca42dc7802062485c3fb6364babee80b13b61ece396234ecb3eea7d3a09"
Nov 29 07:33:10 crc kubenswrapper[4828]: I1129 07:33:10.939810 4828 scope.go:117] "RemoveContainer" containerID="22ce5b9192da0079d361063506d9bf650a257968c4b10b1ffe8ccb5db31359c9"
Nov 29 07:33:10 crc kubenswrapper[4828]: I1129 07:33:10.978104 4828 scope.go:117] "RemoveContainer" containerID="25b43cf2a6628b10eaeed71e2a5d11945dcf4ed71829c8d044f334d5acfdb19e"
Nov 29 07:33:10 crc kubenswrapper[4828]: I1129 07:33:10.999298 4828 scope.go:117] "RemoveContainer" containerID="9a12d874a7daefe8e253d9173e720323ab02536fdb775d142644267c688c0494"
Nov 29 07:33:11 crc kubenswrapper[4828]: I1129 07:33:11.025744 4828 scope.go:117] "RemoveContainer" containerID="3be6f09862d654d9f66a07ec86788b24fa8e8595fa0a97c4e029d20bc04ef090"
Nov 29 07:33:11 crc kubenswrapper[4828]: I1129 07:33:11.047887 4828 scope.go:117] "RemoveContainer" containerID="e1dbf7eec1e0bad3cf1456fdee03d377086998ad799a0bedea4951e9feb62407"
Nov 29 07:33:11 crc kubenswrapper[4828]: I1129 07:33:11.066354 4828 scope.go:117] "RemoveContainer" containerID="39ba9bd8dce5a86dfb422cdb1d4aecad5a12bd916cf6f4fb5a469a739c2cad21"
Nov 29 07:33:11 crc kubenswrapper[4828]: I1129 07:33:11.149050 4828 scope.go:117] "RemoveContainer" containerID="ffca92603e4c81546577b9e30b42b6d2d24698cd204c6dab8888909bc818a053"
Nov 29 07:33:11 crc kubenswrapper[4828]: I1129 07:33:11.169248 4828 scope.go:117] "RemoveContainer" containerID="6d12b73bb4584f32adb25708b3d2f6b76cd12be63e5507401f85ad4f6ad47d87"
Nov 29 07:33:11 crc kubenswrapper[4828]: I1129 07:33:11.188226 4828 scope.go:117] "RemoveContainer" containerID="51cd7045af13ba1dca7f9fdde7ba2cc089d236ab61f1cab0656124cc2b6929b8"
Nov 29 07:33:11 crc kubenswrapper[4828]: I1129 07:33:11.426042 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea442090-ae24-451d-ba14-2d18dbb4076a" path="/var/lib/kubelet/pods/ea442090-ae24-451d-ba14-2d18dbb4076a/volumes"
Nov 29 07:34:05 crc kubenswrapper[4828]: I1129 07:34:05.072111 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-wc5ng"]
Nov 29 07:34:05 crc kubenswrapper[4828]: I1129 07:34:05.083775 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-wc5ng"]
Nov 29 07:34:05 crc kubenswrapper[4828]: I1129 07:34:05.422600 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e2b60cb-6670-4720-8aaf-3db7307905b0" path="/var/lib/kubelet/pods/5e2b60cb-6670-4720-8aaf-3db7307905b0/volumes"
Nov 29 07:34:11 crc kubenswrapper[4828]: I1129 07:34:11.476056 4828 scope.go:117] "RemoveContainer" containerID="139ff3ec2d599516a6e51591094162dc09581953895deba778ad2c3d27b6f738"
Nov 29 07:34:11 crc kubenswrapper[4828]: I1129 07:34:11.514744 4828 scope.go:117] "RemoveContainer" containerID="cb22272f1c7ebd3421c6cee06ec017b778b971a2311ec3aff754e2f293dd8ee9"
Nov 29 07:34:13 crc kubenswrapper[4828]: I1129 07:34:13.059714 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nzmt7"]
Nov 29 07:34:13 crc kubenswrapper[4828]: I1129 07:34:13.078106 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-t8dd8"]
Nov 29 07:34:13 crc kubenswrapper[4828]: I1129 07:34:13.090573 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-t8dd8"]
Nov 29 07:34:13 crc kubenswrapper[4828]: I1129 07:34:13.100735 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nzmt7"]
Nov 29 07:34:13 crc kubenswrapper[4828]: I1129 07:34:13.433659 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="786488d0-cd0e-4b05-b8da-dc01f712028c" path="/var/lib/kubelet/pods/786488d0-cd0e-4b05-b8da-dc01f712028c/volumes"
Nov 29 07:34:13 crc kubenswrapper[4828]: I1129 07:34:13.435209 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6340ac2-1618-4eab-9dce-47cffd0957b3" path="/var/lib/kubelet/pods/b6340ac2-1618-4eab-9dce-47cffd0957b3/volumes"
Nov 29 07:35:06 crc kubenswrapper[4828]: I1129 07:35:06.045642 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-4tb4g"]
Nov 29 07:35:06 crc kubenswrapper[4828]: I1129 07:35:06.056031 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-dwxw5"]
Nov 29 07:35:06 crc kubenswrapper[4828]: I1129 07:35:06.067858 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-4tb4g"]
Nov 29 07:35:06 crc kubenswrapper[4828]: I1129 07:35:06.076419 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-vphwh"]
Nov 29 07:35:06 crc kubenswrapper[4828]: I1129 07:35:06.084399 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-vphwh"]
Nov 29 07:35:06 crc kubenswrapper[4828]: I1129 07:35:06.092772 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-dwxw5"]
Nov 29 07:35:07 crc kubenswrapper[4828]: I1129 07:35:07.424604 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d3d2548-679c-4c58-8709-a28f3178c1d5" path="/var/lib/kubelet/pods/3d3d2548-679c-4c58-8709-a28f3178c1d5/volumes"
Nov 29 07:35:07 crc kubenswrapper[4828]: I1129 07:35:07.425961 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c" path="/var/lib/kubelet/pods/b8b0f537-f6eb-4ee8-ad93-3e3500e2d22c/volumes"
Nov 29 07:35:07 crc kubenswrapper[4828]: I1129 07:35:07.426705 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebec231e-52d4-4a47-9391-c57530dc6de4" path="/var/lib/kubelet/pods/ebec231e-52d4-4a47-9391-c57530dc6de4/volumes"
Nov 29 07:35:09 crc kubenswrapper[4828]: I1129 07:35:09.757962 4828 generic.go:334] "Generic (PLEG): container finished" podID="8525375e-b298-4e44-ae0b-9f26a3b1001a" containerID="569531fafb3e96af00912d36981ef5581b32b8e077886624f0e8e1b65f103e0a" exitCode=0
Nov 29 07:35:09 crc kubenswrapper[4828]: I1129 07:35:09.758059 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9" event={"ID":"8525375e-b298-4e44-ae0b-9f26a3b1001a","Type":"ContainerDied","Data":"569531fafb3e96af00912d36981ef5581b32b8e077886624f0e8e1b65f103e0a"}
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.209770 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.254178 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-ssh-key\") pod \"8525375e-b298-4e44-ae0b-9f26a3b1001a\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") "
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.254514 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-inventory\") pod \"8525375e-b298-4e44-ae0b-9f26a3b1001a\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") "
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.254624 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjvkv\" (UniqueName: \"kubernetes.io/projected/8525375e-b298-4e44-ae0b-9f26a3b1001a-kube-api-access-fjvkv\") pod \"8525375e-b298-4e44-ae0b-9f26a3b1001a\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") "
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.254680 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-bootstrap-combined-ca-bundle\") pod \"8525375e-b298-4e44-ae0b-9f26a3b1001a\" (UID: \"8525375e-b298-4e44-ae0b-9f26a3b1001a\") "
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.260722 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "8525375e-b298-4e44-ae0b-9f26a3b1001a" (UID: "8525375e-b298-4e44-ae0b-9f26a3b1001a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.260888 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8525375e-b298-4e44-ae0b-9f26a3b1001a-kube-api-access-fjvkv" (OuterVolumeSpecName: "kube-api-access-fjvkv") pod "8525375e-b298-4e44-ae0b-9f26a3b1001a" (UID: "8525375e-b298-4e44-ae0b-9f26a3b1001a"). InnerVolumeSpecName "kube-api-access-fjvkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.283432 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8525375e-b298-4e44-ae0b-9f26a3b1001a" (UID: "8525375e-b298-4e44-ae0b-9f26a3b1001a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.287052 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-inventory" (OuterVolumeSpecName: "inventory") pod "8525375e-b298-4e44-ae0b-9f26a3b1001a" (UID: "8525375e-b298-4e44-ae0b-9f26a3b1001a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.357669 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.357707 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-inventory\") on node \"crc\" DevicePath \"\""
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.357717 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjvkv\" (UniqueName: \"kubernetes.io/projected/8525375e-b298-4e44-ae0b-9f26a3b1001a-kube-api-access-fjvkv\") on node \"crc\" DevicePath \"\""
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.357731 4828 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8525375e-b298-4e44-ae0b-9f26a3b1001a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.487579 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.487966 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.633284 4828 scope.go:117] "RemoveContainer" containerID="da276903bc9bdbb57fb309029afa8bb4ee29f2ec9d725aab9bbe149fbb87f59d"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.671445 4828 scope.go:117] "RemoveContainer" containerID="7cd5cb7120d24918028551e6727f971b48efa1aa85c5494735482808a6365985"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.701681 4828 scope.go:117] "RemoveContainer" containerID="277fcaa2500b14c70f6b46ca7c02783a5a575b2a979c1f55f3d3cc531fa3b0a6"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.730565 4828 scope.go:117] "RemoveContainer" containerID="9f7edfd69e625429b1becd952f48c4aee55552a65674746822db26bfa77810c6"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.759750 4828 scope.go:117] "RemoveContainer" containerID="db6a36a8280d2a912a24e482556690d316cb1450bca2f9da1609125e73d6bbd1"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.778043 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9" event={"ID":"8525375e-b298-4e44-ae0b-9f26a3b1001a","Type":"ContainerDied","Data":"a1ddd2f5b8f91f4e1f22ae1fbc35f8d697b77ba320d788836ad2150604a9e7e2"}
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.778116 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ddd2f5b8f91f4e1f22ae1fbc35f8d697b77ba320d788836ad2150604a9e7e2"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.778071 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.872208 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf"]
Nov 29 07:35:11 crc kubenswrapper[4828]: E1129 07:35:11.872763 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8525375e-b298-4e44-ae0b-9f26a3b1001a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.872798 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8525375e-b298-4e44-ae0b-9f26a3b1001a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.873071 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="8525375e-b298-4e44-ae0b-9f26a3b1001a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.874050 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.877032 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.877502 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.877864 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.879031 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.900113 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf"]
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.967741 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf\" (UID: \"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.968027 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7vhb\" (UniqueName: \"kubernetes.io/projected/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-kube-api-access-v7vhb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf\" (UID: \"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf"
Nov 29 07:35:11 crc kubenswrapper[4828]: I1129 07:35:11.968286 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf\" (UID: \"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf"
Nov 29 07:35:12 crc kubenswrapper[4828]: I1129 07:35:12.071024 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf\" (UID: \"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf"
Nov 29 07:35:12 crc kubenswrapper[4828]: I1129 07:35:12.071322 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf\" (UID: \"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf"
Nov 29 07:35:12 crc kubenswrapper[4828]: I1129 07:35:12.071473 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vhb\" (UniqueName: \"kubernetes.io/projected/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-kube-api-access-v7vhb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf\" (UID: \"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf"
Nov 29 07:35:12 crc kubenswrapper[4828]: I1129 07:35:12.076105 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf\" (UID: \"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf"
Nov 29 07:35:12 crc kubenswrapper[4828]: I1129 07:35:12.077065 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf\" (UID: \"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf"
Nov 29 07:35:12 crc kubenswrapper[4828]: I1129 07:35:12.088662 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7vhb\" (UniqueName: \"kubernetes.io/projected/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-kube-api-access-v7vhb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf\" (UID: \"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf"
Nov 29 07:35:12 crc kubenswrapper[4828]: I1129 07:35:12.197553 4828 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf" Nov 29 07:35:12 crc kubenswrapper[4828]: I1129 07:35:12.732144 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf"] Nov 29 07:35:12 crc kubenswrapper[4828]: I1129 07:35:12.733702 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:35:12 crc kubenswrapper[4828]: I1129 07:35:12.797412 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf" event={"ID":"85ada4f9-8597-4409-9fc4-7f4dd3594fcf","Type":"ContainerStarted","Data":"7d259ed804c7fd1b5de588181dc00a63b784a15b037d9e95c59ba015f1a9a65f"} Nov 29 07:35:13 crc kubenswrapper[4828]: I1129 07:35:13.809577 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf" event={"ID":"85ada4f9-8597-4409-9fc4-7f4dd3594fcf","Type":"ContainerStarted","Data":"1fd0f3d7e91cbccdde6c5bd7cee8f32222de8743ab32f30299feb14a07b21fa4"} Nov 29 07:35:13 crc kubenswrapper[4828]: I1129 07:35:13.838456 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf" podStartSLOduration=2.1100750010000002 podStartE2EDuration="2.838385355s" podCreationTimestamp="2025-11-29 07:35:11 +0000 UTC" firstStartedPulling="2025-11-29 07:35:12.73330578 +0000 UTC m=+2052.355381838" lastFinishedPulling="2025-11-29 07:35:13.461616134 +0000 UTC m=+2053.083692192" observedRunningTime="2025-11-29 07:35:13.833336925 +0000 UTC m=+2053.455413003" watchObservedRunningTime="2025-11-29 07:35:13.838385355 +0000 UTC m=+2053.460461413" Nov 29 07:35:41 crc kubenswrapper[4828]: I1129 07:35:41.486660 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:35:41 crc kubenswrapper[4828]: I1129 07:35:41.487306 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.075377 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-05c6-account-create-update-v2dls"] Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.085754 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-kqxf5"] Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.098011 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1c66-account-create-update-ptpql"] Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.112556 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-mqsbn"] Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.124707 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-8p6dr"] Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.138246 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-kqxf5"] Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.149199 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-854f-account-create-update-ftz6n"] Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.163242 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1c66-account-create-update-ptpql"] Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 
07:35:45.175311 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-mqsbn"] Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.186155 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-8p6dr"] Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.200106 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-854f-account-create-update-ftz6n"] Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.209639 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-05c6-account-create-update-v2dls"] Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.425758 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32592977-0620-41a0-9032-84d6dfeba740" path="/var/lib/kubelet/pods/32592977-0620-41a0-9032-84d6dfeba740/volumes" Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.426538 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48e37f07-ea33-4cb7-abc1-2bd210005773" path="/var/lib/kubelet/pods/48e37f07-ea33-4cb7-abc1-2bd210005773/volumes" Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.427139 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="718898d1-9f1d-442b-a581-b388f358f77d" path="/var/lib/kubelet/pods/718898d1-9f1d-442b-a581-b388f358f77d/volumes" Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.427842 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96d052ca-6f4c-4aa1-a411-da901c59e32e" path="/var/lib/kubelet/pods/96d052ca-6f4c-4aa1-a411-da901c59e32e/volumes" Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.429023 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a739da00-650f-46d6-accb-f9e0e93df7af" path="/var/lib/kubelet/pods/a739da00-650f-46d6-accb-f9e0e93df7af/volumes" Nov 29 07:35:45 crc kubenswrapper[4828]: I1129 07:35:45.429747 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="bd70a089-5326-4b8b-8090-f22b19860d0e" path="/var/lib/kubelet/pods/bd70a089-5326-4b8b-8090-f22b19860d0e/volumes" Nov 29 07:35:50 crc kubenswrapper[4828]: I1129 07:35:50.037908 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-mhgs8"] Nov 29 07:35:50 crc kubenswrapper[4828]: I1129 07:35:50.049544 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-mhgs8"] Nov 29 07:35:51 crc kubenswrapper[4828]: I1129 07:35:51.424466 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70dc014d-201b-448d-84ba-2c89e7c10855" path="/var/lib/kubelet/pods/70dc014d-201b-448d-84ba-2c89e7c10855/volumes" Nov 29 07:36:11 crc kubenswrapper[4828]: I1129 07:36:11.486977 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:36:11 crc kubenswrapper[4828]: I1129 07:36:11.487555 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:36:11 crc kubenswrapper[4828]: I1129 07:36:11.487665 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:36:11 crc kubenswrapper[4828]: I1129 07:36:11.488474 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e4b03aa844a4a6319ecb0b1d8c8adf54bb46f868c3fcd41d7078405776727be"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:36:11 crc kubenswrapper[4828]: I1129 07:36:11.488527 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://3e4b03aa844a4a6319ecb0b1d8c8adf54bb46f868c3fcd41d7078405776727be" gracePeriod=600 Nov 29 07:36:11 crc kubenswrapper[4828]: I1129 07:36:11.939817 4828 scope.go:117] "RemoveContainer" containerID="e1e506485c1ea7a4452f3107adefc8e0fc18d9f429760a73eeea4e4d544c8455" Nov 29 07:36:11 crc kubenswrapper[4828]: I1129 07:36:11.996637 4828 scope.go:117] "RemoveContainer" containerID="feb4cfe49fdc17e77b9fccf68d8e4f5077e633bfb65cd807125b7913e4f5b568" Nov 29 07:36:12 crc kubenswrapper[4828]: I1129 07:36:12.025002 4828 scope.go:117] "RemoveContainer" containerID="449670b16f9313737e61efba55064cc5fac4d157a3d05d0875deccba092c45ef" Nov 29 07:36:12 crc kubenswrapper[4828]: I1129 07:36:12.088494 4828 scope.go:117] "RemoveContainer" containerID="6822656aca736aee2151b4eb8e77d3ac2331aa9d8ec05f71cb91e53dfd0ca000" Nov 29 07:36:12 crc kubenswrapper[4828]: I1129 07:36:12.135894 4828 scope.go:117] "RemoveContainer" containerID="8cc51868bd398e20ca767b64b4c7ef917e6956bae6af6d64efd8f699f594afe2" Nov 29 07:36:12 crc kubenswrapper[4828]: I1129 07:36:12.161677 4828 scope.go:117] "RemoveContainer" containerID="f58dc5a9733beeec6aab550f4750fd641361623783e6a529dcc62c0b17def194" Nov 29 07:36:12 crc kubenswrapper[4828]: I1129 07:36:12.183862 4828 scope.go:117] "RemoveContainer" containerID="0290d2dd34604ea94b677a5864222196c85f979ebf71c348dbd1b511e8e0f5e2" Nov 29 07:36:12 crc kubenswrapper[4828]: I1129 07:36:12.433079 4828 generic.go:334] "Generic (PLEG): container finished" podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="3e4b03aa844a4a6319ecb0b1d8c8adf54bb46f868c3fcd41d7078405776727be" exitCode=0 Nov 
29 07:36:12 crc kubenswrapper[4828]: I1129 07:36:12.433129 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"3e4b03aa844a4a6319ecb0b1d8c8adf54bb46f868c3fcd41d7078405776727be"} Nov 29 07:36:12 crc kubenswrapper[4828]: I1129 07:36:12.433151 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130"} Nov 29 07:36:12 crc kubenswrapper[4828]: I1129 07:36:12.433174 4828 scope.go:117] "RemoveContainer" containerID="a45ed786d4384e1575fef34411eb8f2d3d36d9b434459528816d80ebca1f35bd" Nov 29 07:36:24 crc kubenswrapper[4828]: I1129 07:36:24.077115 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wdknn"] Nov 29 07:36:24 crc kubenswrapper[4828]: I1129 07:36:24.085770 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wdknn"] Nov 29 07:36:25 crc kubenswrapper[4828]: I1129 07:36:25.422691 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33043721-20af-4165-8035-2a4fbe295eb3" path="/var/lib/kubelet/pods/33043721-20af-4165-8035-2a4fbe295eb3/volumes" Nov 29 07:36:50 crc kubenswrapper[4828]: I1129 07:36:50.036434 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-m8ph8"] Nov 29 07:36:50 crc kubenswrapper[4828]: I1129 07:36:50.046121 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-m8ph8"] Nov 29 07:36:51 crc kubenswrapper[4828]: I1129 07:36:51.426630 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142" 
path="/var/lib/kubelet/pods/a2b9e3d0-ce21-4f8f-bf10-ea5a8f36b142/volumes" Nov 29 07:36:54 crc kubenswrapper[4828]: I1129 07:36:54.563062 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v7wpd"] Nov 29 07:36:54 crc kubenswrapper[4828]: I1129 07:36:54.565971 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:36:54 crc kubenswrapper[4828]: I1129 07:36:54.574482 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v7wpd"] Nov 29 07:36:54 crc kubenswrapper[4828]: I1129 07:36:54.741403 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fphrb\" (UniqueName: \"kubernetes.io/projected/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-kube-api-access-fphrb\") pod \"redhat-operators-v7wpd\" (UID: \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\") " pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:36:54 crc kubenswrapper[4828]: I1129 07:36:54.741501 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-catalog-content\") pod \"redhat-operators-v7wpd\" (UID: \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\") " pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:36:54 crc kubenswrapper[4828]: I1129 07:36:54.741596 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-utilities\") pod \"redhat-operators-v7wpd\" (UID: \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\") " pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:36:54 crc kubenswrapper[4828]: I1129 07:36:54.843560 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fphrb\" (UniqueName: \"kubernetes.io/projected/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-kube-api-access-fphrb\") pod \"redhat-operators-v7wpd\" (UID: \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\") " pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:36:54 crc kubenswrapper[4828]: I1129 07:36:54.843634 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-catalog-content\") pod \"redhat-operators-v7wpd\" (UID: \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\") " pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:36:54 crc kubenswrapper[4828]: I1129 07:36:54.843726 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-utilities\") pod \"redhat-operators-v7wpd\" (UID: \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\") " pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:36:54 crc kubenswrapper[4828]: I1129 07:36:54.844367 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-utilities\") pod \"redhat-operators-v7wpd\" (UID: \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\") " pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:36:54 crc kubenswrapper[4828]: I1129 07:36:54.844547 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-catalog-content\") pod \"redhat-operators-v7wpd\" (UID: \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\") " pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:36:54 crc kubenswrapper[4828]: I1129 07:36:54.874682 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fphrb\" (UniqueName: 
\"kubernetes.io/projected/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-kube-api-access-fphrb\") pod \"redhat-operators-v7wpd\" (UID: \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\") " pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:36:54 crc kubenswrapper[4828]: I1129 07:36:54.900100 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:36:55 crc kubenswrapper[4828]: I1129 07:36:55.048995 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mrdgm"] Nov 29 07:36:55 crc kubenswrapper[4828]: I1129 07:36:55.059030 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mrdgm"] Nov 29 07:36:55 crc kubenswrapper[4828]: I1129 07:36:55.423968 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b2334c-7b03-45cb-a780-0b40f0bc7bc3" path="/var/lib/kubelet/pods/38b2334c-7b03-45cb-a780-0b40f0bc7bc3/volumes" Nov 29 07:36:55 crc kubenswrapper[4828]: I1129 07:36:55.444984 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v7wpd"] Nov 29 07:36:55 crc kubenswrapper[4828]: I1129 07:36:55.873013 4828 generic.go:334] "Generic (PLEG): container finished" podID="5769b23f-f6bf-46ad-a20a-8ab5e92b4035" containerID="c95c6d19a6602311b794aa62154f2d23921221c9bc0922c373dd50e576778254" exitCode=0 Nov 29 07:36:55 crc kubenswrapper[4828]: I1129 07:36:55.873124 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7wpd" event={"ID":"5769b23f-f6bf-46ad-a20a-8ab5e92b4035","Type":"ContainerDied","Data":"c95c6d19a6602311b794aa62154f2d23921221c9bc0922c373dd50e576778254"} Nov 29 07:36:55 crc kubenswrapper[4828]: I1129 07:36:55.873155 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7wpd" 
event={"ID":"5769b23f-f6bf-46ad-a20a-8ab5e92b4035","Type":"ContainerStarted","Data":"6932ab3bd7205b44831e79ebc68ec30b968dad6a7f85130f1ba82965bc0aabc9"} Nov 29 07:36:55 crc kubenswrapper[4828]: I1129 07:36:55.874977 4828 generic.go:334] "Generic (PLEG): container finished" podID="85ada4f9-8597-4409-9fc4-7f4dd3594fcf" containerID="1fd0f3d7e91cbccdde6c5bd7cee8f32222de8743ab32f30299feb14a07b21fa4" exitCode=0 Nov 29 07:36:55 crc kubenswrapper[4828]: I1129 07:36:55.875006 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf" event={"ID":"85ada4f9-8597-4409-9fc4-7f4dd3594fcf","Type":"ContainerDied","Data":"1fd0f3d7e91cbccdde6c5bd7cee8f32222de8743ab32f30299feb14a07b21fa4"} Nov 29 07:36:57 crc kubenswrapper[4828]: I1129 07:36:57.551024 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf" Nov 29 07:36:57 crc kubenswrapper[4828]: I1129 07:36:57.655255 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-inventory\") pod \"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\" (UID: \"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\") " Nov 29 07:36:57 crc kubenswrapper[4828]: I1129 07:36:57.655343 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7vhb\" (UniqueName: \"kubernetes.io/projected/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-kube-api-access-v7vhb\") pod \"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\" (UID: \"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\") " Nov 29 07:36:57 crc kubenswrapper[4828]: I1129 07:36:57.655505 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-ssh-key\") pod \"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\" (UID: 
\"85ada4f9-8597-4409-9fc4-7f4dd3594fcf\") " Nov 29 07:36:57 crc kubenswrapper[4828]: I1129 07:36:57.661196 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-kube-api-access-v7vhb" (OuterVolumeSpecName: "kube-api-access-v7vhb") pod "85ada4f9-8597-4409-9fc4-7f4dd3594fcf" (UID: "85ada4f9-8597-4409-9fc4-7f4dd3594fcf"). InnerVolumeSpecName "kube-api-access-v7vhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:36:57 crc kubenswrapper[4828]: I1129 07:36:57.687778 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "85ada4f9-8597-4409-9fc4-7f4dd3594fcf" (UID: "85ada4f9-8597-4409-9fc4-7f4dd3594fcf"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:36:57 crc kubenswrapper[4828]: I1129 07:36:57.693459 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-inventory" (OuterVolumeSpecName: "inventory") pod "85ada4f9-8597-4409-9fc4-7f4dd3594fcf" (UID: "85ada4f9-8597-4409-9fc4-7f4dd3594fcf"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:36:57 crc kubenswrapper[4828]: I1129 07:36:57.758668 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:36:57 crc kubenswrapper[4828]: I1129 07:36:57.758710 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7vhb\" (UniqueName: \"kubernetes.io/projected/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-kube-api-access-v7vhb\") on node \"crc\" DevicePath \"\"" Nov 29 07:36:57 crc kubenswrapper[4828]: I1129 07:36:57.758727 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85ada4f9-8597-4409-9fc4-7f4dd3594fcf-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:36:57 crc kubenswrapper[4828]: I1129 07:36:57.932350 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf" event={"ID":"85ada4f9-8597-4409-9fc4-7f4dd3594fcf","Type":"ContainerDied","Data":"7d259ed804c7fd1b5de588181dc00a63b784a15b037d9e95c59ba015f1a9a65f"} Nov 29 07:36:57 crc kubenswrapper[4828]: I1129 07:36:57.932406 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d259ed804c7fd1b5de588181dc00a63b784a15b037d9e95c59ba015f1a9a65f" Nov 29 07:36:57 crc kubenswrapper[4828]: I1129 07:36:57.932474 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf" Nov 29 07:36:57 crc kubenswrapper[4828]: I1129 07:36:57.955050 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7wpd" event={"ID":"5769b23f-f6bf-46ad-a20a-8ab5e92b4035","Type":"ContainerStarted","Data":"13b779b07b4b5cf9bece5b66537f42efde530c15743b506fd211ac198e9b6bde"} Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.024522 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h"] Nov 29 07:36:58 crc kubenswrapper[4828]: E1129 07:36:58.025096 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ada4f9-8597-4409-9fc4-7f4dd3594fcf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.025135 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ada4f9-8597-4409-9fc4-7f4dd3594fcf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.025433 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ada4f9-8597-4409-9fc4-7f4dd3594fcf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.026207 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.031505 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.031751 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.031867 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.032226 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.056435 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h"] Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.165118 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vth56\" (UniqueName: \"kubernetes.io/projected/eb04df0e-e78b-4441-a2bd-76f7b0262653-kube-api-access-vth56\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h\" (UID: \"eb04df0e-e78b-4441-a2bd-76f7b0262653\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.165231 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb04df0e-e78b-4441-a2bd-76f7b0262653-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h\" (UID: \"eb04df0e-e78b-4441-a2bd-76f7b0262653\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 
07:36:58.165380 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb04df0e-e78b-4441-a2bd-76f7b0262653-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h\" (UID: \"eb04df0e-e78b-4441-a2bd-76f7b0262653\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.266960 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vth56\" (UniqueName: \"kubernetes.io/projected/eb04df0e-e78b-4441-a2bd-76f7b0262653-kube-api-access-vth56\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h\" (UID: \"eb04df0e-e78b-4441-a2bd-76f7b0262653\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.267029 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb04df0e-e78b-4441-a2bd-76f7b0262653-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h\" (UID: \"eb04df0e-e78b-4441-a2bd-76f7b0262653\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.267054 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb04df0e-e78b-4441-a2bd-76f7b0262653-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h\" (UID: \"eb04df0e-e78b-4441-a2bd-76f7b0262653\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.272159 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb04df0e-e78b-4441-a2bd-76f7b0262653-ssh-key\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h\" (UID: \"eb04df0e-e78b-4441-a2bd-76f7b0262653\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.272195 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb04df0e-e78b-4441-a2bd-76f7b0262653-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h\" (UID: \"eb04df0e-e78b-4441-a2bd-76f7b0262653\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.285596 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vth56\" (UniqueName: \"kubernetes.io/projected/eb04df0e-e78b-4441-a2bd-76f7b0262653-kube-api-access-vth56\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h\" (UID: \"eb04df0e-e78b-4441-a2bd-76f7b0262653\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.349579 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.889012 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h"] Nov 29 07:36:58 crc kubenswrapper[4828]: I1129 07:36:58.964674 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" event={"ID":"eb04df0e-e78b-4441-a2bd-76f7b0262653","Type":"ContainerStarted","Data":"72e8a82fa55d4578b497a6262ab8fb6aa2ea929ea4a3fc2f60ed8dac6d2afc5b"} Nov 29 07:36:59 crc kubenswrapper[4828]: I1129 07:36:59.977258 4828 generic.go:334] "Generic (PLEG): container finished" podID="5769b23f-f6bf-46ad-a20a-8ab5e92b4035" containerID="13b779b07b4b5cf9bece5b66537f42efde530c15743b506fd211ac198e9b6bde" exitCode=0 Nov 29 07:36:59 crc kubenswrapper[4828]: I1129 07:36:59.977623 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7wpd" event={"ID":"5769b23f-f6bf-46ad-a20a-8ab5e92b4035","Type":"ContainerDied","Data":"13b779b07b4b5cf9bece5b66537f42efde530c15743b506fd211ac198e9b6bde"} Nov 29 07:37:01 crc kubenswrapper[4828]: I1129 07:37:01.586902 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:37:02 crc kubenswrapper[4828]: I1129 07:37:02.005831 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" event={"ID":"eb04df0e-e78b-4441-a2bd-76f7b0262653","Type":"ContainerStarted","Data":"ebb3ad38a5115d1e15b4236ecea282c2ea2f6ff2bb6e87afeb9053b57aa11345"} Nov 29 07:37:02 crc kubenswrapper[4828]: I1129 07:37:02.009772 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7wpd" 
event={"ID":"5769b23f-f6bf-46ad-a20a-8ab5e92b4035","Type":"ContainerStarted","Data":"cc2d77a8dfc50e4a85e1e172300bda0a15cea1214aef38a8ca1933cf1eaebdcd"} Nov 29 07:37:02 crc kubenswrapper[4828]: I1129 07:37:02.031563 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" podStartSLOduration=1.3410921980000001 podStartE2EDuration="4.031502578s" podCreationTimestamp="2025-11-29 07:36:58 +0000 UTC" firstStartedPulling="2025-11-29 07:36:58.89396558 +0000 UTC m=+2158.516041638" lastFinishedPulling="2025-11-29 07:37:01.58437596 +0000 UTC m=+2161.206452018" observedRunningTime="2025-11-29 07:37:02.021824859 +0000 UTC m=+2161.643900917" watchObservedRunningTime="2025-11-29 07:37:02.031502578 +0000 UTC m=+2161.653578636" Nov 29 07:37:02 crc kubenswrapper[4828]: I1129 07:37:02.046476 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v7wpd" podStartSLOduration=2.177201983 podStartE2EDuration="8.046456442s" podCreationTimestamp="2025-11-29 07:36:54 +0000 UTC" firstStartedPulling="2025-11-29 07:36:55.876681922 +0000 UTC m=+2155.498757980" lastFinishedPulling="2025-11-29 07:37:01.745936381 +0000 UTC m=+2161.368012439" observedRunningTime="2025-11-29 07:37:02.04172024 +0000 UTC m=+2161.663796308" watchObservedRunningTime="2025-11-29 07:37:02.046456442 +0000 UTC m=+2161.668532500" Nov 29 07:37:04 crc kubenswrapper[4828]: I1129 07:37:04.902304 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:37:04 crc kubenswrapper[4828]: I1129 07:37:04.902872 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:37:05 crc kubenswrapper[4828]: I1129 07:37:05.948813 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v7wpd" 
podUID="5769b23f-f6bf-46ad-a20a-8ab5e92b4035" containerName="registry-server" probeResult="failure" output=< Nov 29 07:37:05 crc kubenswrapper[4828]: timeout: failed to connect service ":50051" within 1s Nov 29 07:37:05 crc kubenswrapper[4828]: > Nov 29 07:37:12 crc kubenswrapper[4828]: I1129 07:37:12.342856 4828 scope.go:117] "RemoveContainer" containerID="f9109334675860596cda3df54df7d97b62ebe78cb7f57c8b69ca82ccbdbe22ca" Nov 29 07:37:12 crc kubenswrapper[4828]: I1129 07:37:12.398819 4828 scope.go:117] "RemoveContainer" containerID="502d5ee4c39b3cefe8b609992d057b19b7ab830f3c89318e6332746c3f275db8" Nov 29 07:37:12 crc kubenswrapper[4828]: I1129 07:37:12.464074 4828 scope.go:117] "RemoveContainer" containerID="0fcda68522ace4df96adfb4055bd056070a8135b9d1cd76c3c134638f9384f68" Nov 29 07:37:14 crc kubenswrapper[4828]: I1129 07:37:14.962050 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:37:15 crc kubenswrapper[4828]: I1129 07:37:15.020641 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:37:15 crc kubenswrapper[4828]: I1129 07:37:15.200697 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v7wpd"] Nov 29 07:37:16 crc kubenswrapper[4828]: I1129 07:37:16.127817 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v7wpd" podUID="5769b23f-f6bf-46ad-a20a-8ab5e92b4035" containerName="registry-server" containerID="cri-o://cc2d77a8dfc50e4a85e1e172300bda0a15cea1214aef38a8ca1933cf1eaebdcd" gracePeriod=2 Nov 29 07:37:16 crc kubenswrapper[4828]: I1129 07:37:16.581750 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:37:16 crc kubenswrapper[4828]: I1129 07:37:16.629720 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-utilities\") pod \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\" (UID: \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\") " Nov 29 07:37:16 crc kubenswrapper[4828]: I1129 07:37:16.629894 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-catalog-content\") pod \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\" (UID: \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\") " Nov 29 07:37:16 crc kubenswrapper[4828]: I1129 07:37:16.629918 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fphrb\" (UniqueName: \"kubernetes.io/projected/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-kube-api-access-fphrb\") pod \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\" (UID: \"5769b23f-f6bf-46ad-a20a-8ab5e92b4035\") " Nov 29 07:37:16 crc kubenswrapper[4828]: I1129 07:37:16.630656 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-utilities" (OuterVolumeSpecName: "utilities") pod "5769b23f-f6bf-46ad-a20a-8ab5e92b4035" (UID: "5769b23f-f6bf-46ad-a20a-8ab5e92b4035"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:37:16 crc kubenswrapper[4828]: I1129 07:37:16.636806 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-kube-api-access-fphrb" (OuterVolumeSpecName: "kube-api-access-fphrb") pod "5769b23f-f6bf-46ad-a20a-8ab5e92b4035" (UID: "5769b23f-f6bf-46ad-a20a-8ab5e92b4035"). InnerVolumeSpecName "kube-api-access-fphrb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:37:16 crc kubenswrapper[4828]: I1129 07:37:16.732349 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fphrb\" (UniqueName: \"kubernetes.io/projected/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-kube-api-access-fphrb\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:16 crc kubenswrapper[4828]: I1129 07:37:16.732399 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:16 crc kubenswrapper[4828]: I1129 07:37:16.743754 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5769b23f-f6bf-46ad-a20a-8ab5e92b4035" (UID: "5769b23f-f6bf-46ad-a20a-8ab5e92b4035"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:37:16 crc kubenswrapper[4828]: I1129 07:37:16.834774 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5769b23f-f6bf-46ad-a20a-8ab5e92b4035-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.141913 4828 generic.go:334] "Generic (PLEG): container finished" podID="5769b23f-f6bf-46ad-a20a-8ab5e92b4035" containerID="cc2d77a8dfc50e4a85e1e172300bda0a15cea1214aef38a8ca1933cf1eaebdcd" exitCode=0 Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.141958 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7wpd" event={"ID":"5769b23f-f6bf-46ad-a20a-8ab5e92b4035","Type":"ContainerDied","Data":"cc2d77a8dfc50e4a85e1e172300bda0a15cea1214aef38a8ca1933cf1eaebdcd"} Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.141994 4828 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-v7wpd" event={"ID":"5769b23f-f6bf-46ad-a20a-8ab5e92b4035","Type":"ContainerDied","Data":"6932ab3bd7205b44831e79ebc68ec30b968dad6a7f85130f1ba82965bc0aabc9"} Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.141997 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v7wpd" Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.142011 4828 scope.go:117] "RemoveContainer" containerID="cc2d77a8dfc50e4a85e1e172300bda0a15cea1214aef38a8ca1933cf1eaebdcd" Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.178280 4828 scope.go:117] "RemoveContainer" containerID="13b779b07b4b5cf9bece5b66537f42efde530c15743b506fd211ac198e9b6bde" Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.180719 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v7wpd"] Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.190377 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v7wpd"] Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.205430 4828 scope.go:117] "RemoveContainer" containerID="c95c6d19a6602311b794aa62154f2d23921221c9bc0922c373dd50e576778254" Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.256671 4828 scope.go:117] "RemoveContainer" containerID="cc2d77a8dfc50e4a85e1e172300bda0a15cea1214aef38a8ca1933cf1eaebdcd" Nov 29 07:37:17 crc kubenswrapper[4828]: E1129 07:37:17.257256 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc2d77a8dfc50e4a85e1e172300bda0a15cea1214aef38a8ca1933cf1eaebdcd\": container with ID starting with cc2d77a8dfc50e4a85e1e172300bda0a15cea1214aef38a8ca1933cf1eaebdcd not found: ID does not exist" containerID="cc2d77a8dfc50e4a85e1e172300bda0a15cea1214aef38a8ca1933cf1eaebdcd" Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.257320 4828 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc2d77a8dfc50e4a85e1e172300bda0a15cea1214aef38a8ca1933cf1eaebdcd"} err="failed to get container status \"cc2d77a8dfc50e4a85e1e172300bda0a15cea1214aef38a8ca1933cf1eaebdcd\": rpc error: code = NotFound desc = could not find container \"cc2d77a8dfc50e4a85e1e172300bda0a15cea1214aef38a8ca1933cf1eaebdcd\": container with ID starting with cc2d77a8dfc50e4a85e1e172300bda0a15cea1214aef38a8ca1933cf1eaebdcd not found: ID does not exist" Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.257352 4828 scope.go:117] "RemoveContainer" containerID="13b779b07b4b5cf9bece5b66537f42efde530c15743b506fd211ac198e9b6bde" Nov 29 07:37:17 crc kubenswrapper[4828]: E1129 07:37:17.257751 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13b779b07b4b5cf9bece5b66537f42efde530c15743b506fd211ac198e9b6bde\": container with ID starting with 13b779b07b4b5cf9bece5b66537f42efde530c15743b506fd211ac198e9b6bde not found: ID does not exist" containerID="13b779b07b4b5cf9bece5b66537f42efde530c15743b506fd211ac198e9b6bde" Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.257844 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b779b07b4b5cf9bece5b66537f42efde530c15743b506fd211ac198e9b6bde"} err="failed to get container status \"13b779b07b4b5cf9bece5b66537f42efde530c15743b506fd211ac198e9b6bde\": rpc error: code = NotFound desc = could not find container \"13b779b07b4b5cf9bece5b66537f42efde530c15743b506fd211ac198e9b6bde\": container with ID starting with 13b779b07b4b5cf9bece5b66537f42efde530c15743b506fd211ac198e9b6bde not found: ID does not exist" Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.257893 4828 scope.go:117] "RemoveContainer" containerID="c95c6d19a6602311b794aa62154f2d23921221c9bc0922c373dd50e576778254" Nov 29 07:37:17 crc kubenswrapper[4828]: E1129 
07:37:17.258225 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c95c6d19a6602311b794aa62154f2d23921221c9bc0922c373dd50e576778254\": container with ID starting with c95c6d19a6602311b794aa62154f2d23921221c9bc0922c373dd50e576778254 not found: ID does not exist" containerID="c95c6d19a6602311b794aa62154f2d23921221c9bc0922c373dd50e576778254" Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.258249 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c95c6d19a6602311b794aa62154f2d23921221c9bc0922c373dd50e576778254"} err="failed to get container status \"c95c6d19a6602311b794aa62154f2d23921221c9bc0922c373dd50e576778254\": rpc error: code = NotFound desc = could not find container \"c95c6d19a6602311b794aa62154f2d23921221c9bc0922c373dd50e576778254\": container with ID starting with c95c6d19a6602311b794aa62154f2d23921221c9bc0922c373dd50e576778254 not found: ID does not exist" Nov 29 07:37:17 crc kubenswrapper[4828]: I1129 07:37:17.430384 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5769b23f-f6bf-46ad-a20a-8ab5e92b4035" path="/var/lib/kubelet/pods/5769b23f-f6bf-46ad-a20a-8ab5e92b4035/volumes" Nov 29 07:38:00 crc kubenswrapper[4828]: I1129 07:38:00.037049 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-q954t"] Nov 29 07:38:00 crc kubenswrapper[4828]: I1129 07:38:00.044986 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-q954t"] Nov 29 07:38:01 crc kubenswrapper[4828]: I1129 07:38:01.435881 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23daa968-b9e7-4bfe-88eb-4aebf6ac37cb" path="/var/lib/kubelet/pods/23daa968-b9e7-4bfe-88eb-4aebf6ac37cb/volumes" Nov 29 07:38:11 crc kubenswrapper[4828]: I1129 07:38:11.487164 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:38:11 crc kubenswrapper[4828]: I1129 07:38:11.487906 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:38:12 crc kubenswrapper[4828]: I1129 07:38:12.580457 4828 scope.go:117] "RemoveContainer" containerID="380c995285f836694980cd286ea0bb721e95681a155d99d25178a0de1d731651" Nov 29 07:38:21 crc kubenswrapper[4828]: I1129 07:38:21.797232 4828 generic.go:334] "Generic (PLEG): container finished" podID="eb04df0e-e78b-4441-a2bd-76f7b0262653" containerID="ebb3ad38a5115d1e15b4236ecea282c2ea2f6ff2bb6e87afeb9053b57aa11345" exitCode=0 Nov 29 07:38:21 crc kubenswrapper[4828]: I1129 07:38:21.797322 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" event={"ID":"eb04df0e-e78b-4441-a2bd-76f7b0262653","Type":"ContainerDied","Data":"ebb3ad38a5115d1e15b4236ecea282c2ea2f6ff2bb6e87afeb9053b57aa11345"} Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.230176 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.373700 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb04df0e-e78b-4441-a2bd-76f7b0262653-inventory\") pod \"eb04df0e-e78b-4441-a2bd-76f7b0262653\" (UID: \"eb04df0e-e78b-4441-a2bd-76f7b0262653\") " Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.373824 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb04df0e-e78b-4441-a2bd-76f7b0262653-ssh-key\") pod \"eb04df0e-e78b-4441-a2bd-76f7b0262653\" (UID: \"eb04df0e-e78b-4441-a2bd-76f7b0262653\") " Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.373877 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vth56\" (UniqueName: \"kubernetes.io/projected/eb04df0e-e78b-4441-a2bd-76f7b0262653-kube-api-access-vth56\") pod \"eb04df0e-e78b-4441-a2bd-76f7b0262653\" (UID: \"eb04df0e-e78b-4441-a2bd-76f7b0262653\") " Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.384253 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb04df0e-e78b-4441-a2bd-76f7b0262653-kube-api-access-vth56" (OuterVolumeSpecName: "kube-api-access-vth56") pod "eb04df0e-e78b-4441-a2bd-76f7b0262653" (UID: "eb04df0e-e78b-4441-a2bd-76f7b0262653"). InnerVolumeSpecName "kube-api-access-vth56". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.405616 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb04df0e-e78b-4441-a2bd-76f7b0262653-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eb04df0e-e78b-4441-a2bd-76f7b0262653" (UID: "eb04df0e-e78b-4441-a2bd-76f7b0262653"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.422730 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb04df0e-e78b-4441-a2bd-76f7b0262653-inventory" (OuterVolumeSpecName: "inventory") pod "eb04df0e-e78b-4441-a2bd-76f7b0262653" (UID: "eb04df0e-e78b-4441-a2bd-76f7b0262653"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.477552 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb04df0e-e78b-4441-a2bd-76f7b0262653-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.477584 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb04df0e-e78b-4441-a2bd-76f7b0262653-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.477594 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vth56\" (UniqueName: \"kubernetes.io/projected/eb04df0e-e78b-4441-a2bd-76f7b0262653-kube-api-access-vth56\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.815377 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" event={"ID":"eb04df0e-e78b-4441-a2bd-76f7b0262653","Type":"ContainerDied","Data":"72e8a82fa55d4578b497a6262ab8fb6aa2ea929ea4a3fc2f60ed8dac6d2afc5b"} Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.815444 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72e8a82fa55d4578b497a6262ab8fb6aa2ea929ea4a3fc2f60ed8dac6d2afc5b" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.815447 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.916242 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6"] Nov 29 07:38:23 crc kubenswrapper[4828]: E1129 07:38:23.916807 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5769b23f-f6bf-46ad-a20a-8ab5e92b4035" containerName="extract-content" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.916845 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5769b23f-f6bf-46ad-a20a-8ab5e92b4035" containerName="extract-content" Nov 29 07:38:23 crc kubenswrapper[4828]: E1129 07:38:23.916881 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5769b23f-f6bf-46ad-a20a-8ab5e92b4035" containerName="registry-server" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.916891 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5769b23f-f6bf-46ad-a20a-8ab5e92b4035" containerName="registry-server" Nov 29 07:38:23 crc kubenswrapper[4828]: E1129 07:38:23.916904 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5769b23f-f6bf-46ad-a20a-8ab5e92b4035" containerName="extract-utilities" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.916912 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5769b23f-f6bf-46ad-a20a-8ab5e92b4035" containerName="extract-utilities" Nov 29 07:38:23 crc kubenswrapper[4828]: E1129 07:38:23.916925 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04df0e-e78b-4441-a2bd-76f7b0262653" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.916935 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04df0e-e78b-4441-a2bd-76f7b0262653" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:38:23 crc 
kubenswrapper[4828]: I1129 07:38:23.917212 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5769b23f-f6bf-46ad-a20a-8ab5e92b4035" containerName="registry-server" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.917258 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04df0e-e78b-4441-a2bd-76f7b0262653" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.918099 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.920368 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.923224 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.923740 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.924010 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.931832 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6"] Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.987791 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1c81965-17fb-40fe-bc15-a75f50a27eb8-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6\" (UID: \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.988170 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1c81965-17fb-40fe-bc15-a75f50a27eb8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6\" (UID: \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" Nov 29 07:38:23 crc kubenswrapper[4828]: I1129 07:38:23.988428 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6mmr\" (UniqueName: \"kubernetes.io/projected/f1c81965-17fb-40fe-bc15-a75f50a27eb8-kube-api-access-q6mmr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6\" (UID: \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" Nov 29 07:38:24 crc kubenswrapper[4828]: I1129 07:38:24.091254 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1c81965-17fb-40fe-bc15-a75f50a27eb8-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6\" (UID: \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" Nov 29 07:38:24 crc kubenswrapper[4828]: I1129 07:38:24.091457 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1c81965-17fb-40fe-bc15-a75f50a27eb8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6\" (UID: \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" Nov 29 07:38:24 crc kubenswrapper[4828]: I1129 07:38:24.091691 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q6mmr\" (UniqueName: \"kubernetes.io/projected/f1c81965-17fb-40fe-bc15-a75f50a27eb8-kube-api-access-q6mmr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6\" (UID: \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" Nov 29 07:38:24 crc kubenswrapper[4828]: I1129 07:38:24.096377 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1c81965-17fb-40fe-bc15-a75f50a27eb8-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6\" (UID: \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" Nov 29 07:38:24 crc kubenswrapper[4828]: I1129 07:38:24.096398 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1c81965-17fb-40fe-bc15-a75f50a27eb8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6\" (UID: \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" Nov 29 07:38:24 crc kubenswrapper[4828]: I1129 07:38:24.109831 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6mmr\" (UniqueName: \"kubernetes.io/projected/f1c81965-17fb-40fe-bc15-a75f50a27eb8-kube-api-access-q6mmr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6\" (UID: \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" Nov 29 07:38:24 crc kubenswrapper[4828]: I1129 07:38:24.244459 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" Nov 29 07:38:24 crc kubenswrapper[4828]: I1129 07:38:24.809931 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6"] Nov 29 07:38:24 crc kubenswrapper[4828]: I1129 07:38:24.824947 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" event={"ID":"f1c81965-17fb-40fe-bc15-a75f50a27eb8","Type":"ContainerStarted","Data":"1d807e5cce2e987c49bb07bcd59d1341c8c765ca9d139a6d862b5d24385dd865"} Nov 29 07:38:25 crc kubenswrapper[4828]: I1129 07:38:25.836509 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" event={"ID":"f1c81965-17fb-40fe-bc15-a75f50a27eb8","Type":"ContainerStarted","Data":"e2d221fb8e8422fd986599591b6db7b569554a877bceaa91d87b7884193220a7"} Nov 29 07:38:28 crc kubenswrapper[4828]: I1129 07:38:28.368189 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" podStartSLOduration=4.913203434 podStartE2EDuration="5.368149703s" podCreationTimestamp="2025-11-29 07:38:23 +0000 UTC" firstStartedPulling="2025-11-29 07:38:24.81789282 +0000 UTC m=+2244.439968888" lastFinishedPulling="2025-11-29 07:38:25.272839099 +0000 UTC m=+2244.894915157" observedRunningTime="2025-11-29 07:38:25.85874976 +0000 UTC m=+2245.480825848" watchObservedRunningTime="2025-11-29 07:38:28.368149703 +0000 UTC m=+2247.990225761" Nov 29 07:38:28 crc kubenswrapper[4828]: I1129 07:38:28.375085 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xltvz"] Nov 29 07:38:28 crc kubenswrapper[4828]: I1129 07:38:28.376917 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:28 crc kubenswrapper[4828]: I1129 07:38:28.388936 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xltvz"] Nov 29 07:38:28 crc kubenswrapper[4828]: I1129 07:38:28.393027 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xltlg\" (UniqueName: \"kubernetes.io/projected/2382ea61-cbab-408e-bfaa-a9d61897fec0-kube-api-access-xltlg\") pod \"certified-operators-xltvz\" (UID: \"2382ea61-cbab-408e-bfaa-a9d61897fec0\") " pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:28 crc kubenswrapper[4828]: I1129 07:38:28.393348 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2382ea61-cbab-408e-bfaa-a9d61897fec0-catalog-content\") pod \"certified-operators-xltvz\" (UID: \"2382ea61-cbab-408e-bfaa-a9d61897fec0\") " pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:28 crc kubenswrapper[4828]: I1129 07:38:28.393571 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2382ea61-cbab-408e-bfaa-a9d61897fec0-utilities\") pod \"certified-operators-xltvz\" (UID: \"2382ea61-cbab-408e-bfaa-a9d61897fec0\") " pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:28 crc kubenswrapper[4828]: I1129 07:38:28.495875 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2382ea61-cbab-408e-bfaa-a9d61897fec0-utilities\") pod \"certified-operators-xltvz\" (UID: \"2382ea61-cbab-408e-bfaa-a9d61897fec0\") " pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:28 crc kubenswrapper[4828]: I1129 07:38:28.495960 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xltlg\" (UniqueName: \"kubernetes.io/projected/2382ea61-cbab-408e-bfaa-a9d61897fec0-kube-api-access-xltlg\") pod \"certified-operators-xltvz\" (UID: \"2382ea61-cbab-408e-bfaa-a9d61897fec0\") " pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:28 crc kubenswrapper[4828]: I1129 07:38:28.496045 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2382ea61-cbab-408e-bfaa-a9d61897fec0-catalog-content\") pod \"certified-operators-xltvz\" (UID: \"2382ea61-cbab-408e-bfaa-a9d61897fec0\") " pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:28 crc kubenswrapper[4828]: I1129 07:38:28.496529 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2382ea61-cbab-408e-bfaa-a9d61897fec0-utilities\") pod \"certified-operators-xltvz\" (UID: \"2382ea61-cbab-408e-bfaa-a9d61897fec0\") " pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:28 crc kubenswrapper[4828]: I1129 07:38:28.497309 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2382ea61-cbab-408e-bfaa-a9d61897fec0-catalog-content\") pod \"certified-operators-xltvz\" (UID: \"2382ea61-cbab-408e-bfaa-a9d61897fec0\") " pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:28 crc kubenswrapper[4828]: I1129 07:38:28.523838 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xltlg\" (UniqueName: \"kubernetes.io/projected/2382ea61-cbab-408e-bfaa-a9d61897fec0-kube-api-access-xltlg\") pod \"certified-operators-xltvz\" (UID: \"2382ea61-cbab-408e-bfaa-a9d61897fec0\") " pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:28 crc kubenswrapper[4828]: I1129 07:38:28.694980 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:29 crc kubenswrapper[4828]: I1129 07:38:29.254257 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xltvz"] Nov 29 07:38:29 crc kubenswrapper[4828]: W1129 07:38:29.339924 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2382ea61_cbab_408e_bfaa_a9d61897fec0.slice/crio-709e9a6268fde63ce11fe9f39e6feaeb433152bcbecb2458ecb33e0ee56e6e30 WatchSource:0}: Error finding container 709e9a6268fde63ce11fe9f39e6feaeb433152bcbecb2458ecb33e0ee56e6e30: Status 404 returned error can't find the container with id 709e9a6268fde63ce11fe9f39e6feaeb433152bcbecb2458ecb33e0ee56e6e30 Nov 29 07:38:29 crc kubenswrapper[4828]: I1129 07:38:29.887925 4828 generic.go:334] "Generic (PLEG): container finished" podID="2382ea61-cbab-408e-bfaa-a9d61897fec0" containerID="15c4b1d16da27c2b51e012ee7848c209a9f1f4e73988a05266e4d7c3a887062b" exitCode=0 Nov 29 07:38:29 crc kubenswrapper[4828]: I1129 07:38:29.887974 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xltvz" event={"ID":"2382ea61-cbab-408e-bfaa-a9d61897fec0","Type":"ContainerDied","Data":"15c4b1d16da27c2b51e012ee7848c209a9f1f4e73988a05266e4d7c3a887062b"} Nov 29 07:38:29 crc kubenswrapper[4828]: I1129 07:38:29.888008 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xltvz" event={"ID":"2382ea61-cbab-408e-bfaa-a9d61897fec0","Type":"ContainerStarted","Data":"709e9a6268fde63ce11fe9f39e6feaeb433152bcbecb2458ecb33e0ee56e6e30"} Nov 29 07:38:30 crc kubenswrapper[4828]: I1129 07:38:30.899444 4828 generic.go:334] "Generic (PLEG): container finished" podID="f1c81965-17fb-40fe-bc15-a75f50a27eb8" containerID="e2d221fb8e8422fd986599591b6db7b569554a877bceaa91d87b7884193220a7" exitCode=0 Nov 29 07:38:30 crc kubenswrapper[4828]: I1129 
07:38:30.899665 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" event={"ID":"f1c81965-17fb-40fe-bc15-a75f50a27eb8","Type":"ContainerDied","Data":"e2d221fb8e8422fd986599591b6db7b569554a877bceaa91d87b7884193220a7"} Nov 29 07:38:31 crc kubenswrapper[4828]: I1129 07:38:31.913374 4828 generic.go:334] "Generic (PLEG): container finished" podID="2382ea61-cbab-408e-bfaa-a9d61897fec0" containerID="63a0e46ed6275b3a9e04df61e6be69e622bf1501f12b5ec533185442dfa0128e" exitCode=0 Nov 29 07:38:31 crc kubenswrapper[4828]: I1129 07:38:31.913469 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xltvz" event={"ID":"2382ea61-cbab-408e-bfaa-a9d61897fec0","Type":"ContainerDied","Data":"63a0e46ed6275b3a9e04df61e6be69e622bf1501f12b5ec533185442dfa0128e"} Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.359527 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.411578 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1c81965-17fb-40fe-bc15-a75f50a27eb8-inventory\") pod \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\" (UID: \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\") " Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.411846 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1c81965-17fb-40fe-bc15-a75f50a27eb8-ssh-key\") pod \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\" (UID: \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\") " Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.411921 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6mmr\" (UniqueName: 
\"kubernetes.io/projected/f1c81965-17fb-40fe-bc15-a75f50a27eb8-kube-api-access-q6mmr\") pod \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\" (UID: \"f1c81965-17fb-40fe-bc15-a75f50a27eb8\") " Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.418509 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c81965-17fb-40fe-bc15-a75f50a27eb8-kube-api-access-q6mmr" (OuterVolumeSpecName: "kube-api-access-q6mmr") pod "f1c81965-17fb-40fe-bc15-a75f50a27eb8" (UID: "f1c81965-17fb-40fe-bc15-a75f50a27eb8"). InnerVolumeSpecName "kube-api-access-q6mmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.443346 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c81965-17fb-40fe-bc15-a75f50a27eb8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f1c81965-17fb-40fe-bc15-a75f50a27eb8" (UID: "f1c81965-17fb-40fe-bc15-a75f50a27eb8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.446829 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c81965-17fb-40fe-bc15-a75f50a27eb8-inventory" (OuterVolumeSpecName: "inventory") pod "f1c81965-17fb-40fe-bc15-a75f50a27eb8" (UID: "f1c81965-17fb-40fe-bc15-a75f50a27eb8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.515036 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1c81965-17fb-40fe-bc15-a75f50a27eb8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.515092 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6mmr\" (UniqueName: \"kubernetes.io/projected/f1c81965-17fb-40fe-bc15-a75f50a27eb8-kube-api-access-q6mmr\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.515104 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1c81965-17fb-40fe-bc15-a75f50a27eb8-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.925008 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" event={"ID":"f1c81965-17fb-40fe-bc15-a75f50a27eb8","Type":"ContainerDied","Data":"1d807e5cce2e987c49bb07bcd59d1341c8c765ca9d139a6d862b5d24385dd865"} Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.925250 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d807e5cce2e987c49bb07bcd59d1341c8c765ca9d139a6d862b5d24385dd865" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.925019 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.927640 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xltvz" event={"ID":"2382ea61-cbab-408e-bfaa-a9d61897fec0","Type":"ContainerStarted","Data":"f38eb8b87bda1b1d9d79edefbd52a8f221ce9dd8c8fd2bf2b3cb0a4ce088f233"} Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.970223 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xltvz" podStartSLOduration=2.301187979 podStartE2EDuration="4.970197134s" podCreationTimestamp="2025-11-29 07:38:28 +0000 UTC" firstStartedPulling="2025-11-29 07:38:29.890342235 +0000 UTC m=+2249.512418293" lastFinishedPulling="2025-11-29 07:38:32.55935139 +0000 UTC m=+2252.181427448" observedRunningTime="2025-11-29 07:38:32.953940069 +0000 UTC m=+2252.576016127" watchObservedRunningTime="2025-11-29 07:38:32.970197134 +0000 UTC m=+2252.592273192" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.994465 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s"] Nov 29 07:38:32 crc kubenswrapper[4828]: E1129 07:38:32.994980 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c81965-17fb-40fe-bc15-a75f50a27eb8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.995009 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c81965-17fb-40fe-bc15-a75f50a27eb8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.995255 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c81965-17fb-40fe-bc15-a75f50a27eb8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:38:32 crc kubenswrapper[4828]: 
I1129 07:38:32.996092 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.998646 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.998664 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:38:32 crc kubenswrapper[4828]: I1129 07:38:32.999685 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:38:33 crc kubenswrapper[4828]: I1129 07:38:33.000001 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:38:33 crc kubenswrapper[4828]: I1129 07:38:33.013203 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s"] Nov 29 07:38:33 crc kubenswrapper[4828]: I1129 07:38:33.126135 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwcjf\" (UniqueName: \"kubernetes.io/projected/ffcc2240-c156-4d2b-9500-1bf8015e5733-kube-api-access-cwcjf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bj26s\" (UID: \"ffcc2240-c156-4d2b-9500-1bf8015e5733\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" Nov 29 07:38:33 crc kubenswrapper[4828]: I1129 07:38:33.126620 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffcc2240-c156-4d2b-9500-1bf8015e5733-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bj26s\" (UID: \"ffcc2240-c156-4d2b-9500-1bf8015e5733\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" Nov 29 07:38:33 
crc kubenswrapper[4828]: I1129 07:38:33.126980 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffcc2240-c156-4d2b-9500-1bf8015e5733-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bj26s\" (UID: \"ffcc2240-c156-4d2b-9500-1bf8015e5733\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" Nov 29 07:38:33 crc kubenswrapper[4828]: I1129 07:38:33.228615 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffcc2240-c156-4d2b-9500-1bf8015e5733-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bj26s\" (UID: \"ffcc2240-c156-4d2b-9500-1bf8015e5733\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" Nov 29 07:38:33 crc kubenswrapper[4828]: I1129 07:38:33.228719 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwcjf\" (UniqueName: \"kubernetes.io/projected/ffcc2240-c156-4d2b-9500-1bf8015e5733-kube-api-access-cwcjf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bj26s\" (UID: \"ffcc2240-c156-4d2b-9500-1bf8015e5733\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" Nov 29 07:38:33 crc kubenswrapper[4828]: I1129 07:38:33.228767 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffcc2240-c156-4d2b-9500-1bf8015e5733-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bj26s\" (UID: \"ffcc2240-c156-4d2b-9500-1bf8015e5733\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" Nov 29 07:38:33 crc kubenswrapper[4828]: I1129 07:38:33.234635 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffcc2240-c156-4d2b-9500-1bf8015e5733-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-bj26s\" (UID: \"ffcc2240-c156-4d2b-9500-1bf8015e5733\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" Nov 29 07:38:33 crc kubenswrapper[4828]: I1129 07:38:33.235536 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffcc2240-c156-4d2b-9500-1bf8015e5733-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bj26s\" (UID: \"ffcc2240-c156-4d2b-9500-1bf8015e5733\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" Nov 29 07:38:33 crc kubenswrapper[4828]: I1129 07:38:33.247252 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwcjf\" (UniqueName: \"kubernetes.io/projected/ffcc2240-c156-4d2b-9500-1bf8015e5733-kube-api-access-cwcjf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bj26s\" (UID: \"ffcc2240-c156-4d2b-9500-1bf8015e5733\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" Nov 29 07:38:33 crc kubenswrapper[4828]: I1129 07:38:33.312628 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" Nov 29 07:38:33 crc kubenswrapper[4828]: I1129 07:38:33.901648 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s"] Nov 29 07:38:33 crc kubenswrapper[4828]: W1129 07:38:33.904292 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffcc2240_c156_4d2b_9500_1bf8015e5733.slice/crio-2e3e5b0d74369636a2fe4fd63854721db9f39459cea0f751cd5d28f8a80d1548 WatchSource:0}: Error finding container 2e3e5b0d74369636a2fe4fd63854721db9f39459cea0f751cd5d28f8a80d1548: Status 404 returned error can't find the container with id 2e3e5b0d74369636a2fe4fd63854721db9f39459cea0f751cd5d28f8a80d1548 Nov 29 07:38:33 crc kubenswrapper[4828]: I1129 07:38:33.937029 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" event={"ID":"ffcc2240-c156-4d2b-9500-1bf8015e5733","Type":"ContainerStarted","Data":"2e3e5b0d74369636a2fe4fd63854721db9f39459cea0f751cd5d28f8a80d1548"} Nov 29 07:38:34 crc kubenswrapper[4828]: I1129 07:38:34.946827 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" event={"ID":"ffcc2240-c156-4d2b-9500-1bf8015e5733","Type":"ContainerStarted","Data":"d6ab56090c7f123e06122664cba51e252812126c348775a07ffde381a45d5eab"} Nov 29 07:38:34 crc kubenswrapper[4828]: I1129 07:38:34.972584 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" podStartSLOduration=2.474356765 podStartE2EDuration="2.972560878s" podCreationTimestamp="2025-11-29 07:38:32 +0000 UTC" firstStartedPulling="2025-11-29 07:38:33.907318397 +0000 UTC m=+2253.529394455" lastFinishedPulling="2025-11-29 07:38:34.40552251 +0000 UTC m=+2254.027598568" 
observedRunningTime="2025-11-29 07:38:34.964728979 +0000 UTC m=+2254.586805047" watchObservedRunningTime="2025-11-29 07:38:34.972560878 +0000 UTC m=+2254.594636936" Nov 29 07:38:38 crc kubenswrapper[4828]: I1129 07:38:38.695743 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:38 crc kubenswrapper[4828]: I1129 07:38:38.696511 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:38 crc kubenswrapper[4828]: I1129 07:38:38.747623 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:39 crc kubenswrapper[4828]: I1129 07:38:39.055189 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:39 crc kubenswrapper[4828]: I1129 07:38:39.106228 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xltvz"] Nov 29 07:38:40 crc kubenswrapper[4828]: I1129 07:38:40.999361 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xltvz" podUID="2382ea61-cbab-408e-bfaa-a9d61897fec0" containerName="registry-server" containerID="cri-o://f38eb8b87bda1b1d9d79edefbd52a8f221ce9dd8c8fd2bf2b3cb0a4ce088f233" gracePeriod=2 Nov 29 07:38:41 crc kubenswrapper[4828]: I1129 07:38:41.487438 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:38:41 crc kubenswrapper[4828]: I1129 07:38:41.487806 4828 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:38:41 crc kubenswrapper[4828]: I1129 07:38:41.498447 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:41 crc kubenswrapper[4828]: I1129 07:38:41.557695 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2382ea61-cbab-408e-bfaa-a9d61897fec0-utilities\") pod \"2382ea61-cbab-408e-bfaa-a9d61897fec0\" (UID: \"2382ea61-cbab-408e-bfaa-a9d61897fec0\") " Nov 29 07:38:41 crc kubenswrapper[4828]: I1129 07:38:41.557823 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2382ea61-cbab-408e-bfaa-a9d61897fec0-catalog-content\") pod \"2382ea61-cbab-408e-bfaa-a9d61897fec0\" (UID: \"2382ea61-cbab-408e-bfaa-a9d61897fec0\") " Nov 29 07:38:41 crc kubenswrapper[4828]: I1129 07:38:41.557981 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xltlg\" (UniqueName: \"kubernetes.io/projected/2382ea61-cbab-408e-bfaa-a9d61897fec0-kube-api-access-xltlg\") pod \"2382ea61-cbab-408e-bfaa-a9d61897fec0\" (UID: \"2382ea61-cbab-408e-bfaa-a9d61897fec0\") " Nov 29 07:38:41 crc kubenswrapper[4828]: I1129 07:38:41.560656 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2382ea61-cbab-408e-bfaa-a9d61897fec0-utilities" (OuterVolumeSpecName: "utilities") pod "2382ea61-cbab-408e-bfaa-a9d61897fec0" (UID: "2382ea61-cbab-408e-bfaa-a9d61897fec0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:38:41 crc kubenswrapper[4828]: I1129 07:38:41.577373 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2382ea61-cbab-408e-bfaa-a9d61897fec0-kube-api-access-xltlg" (OuterVolumeSpecName: "kube-api-access-xltlg") pod "2382ea61-cbab-408e-bfaa-a9d61897fec0" (UID: "2382ea61-cbab-408e-bfaa-a9d61897fec0"). InnerVolumeSpecName "kube-api-access-xltlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:38:41 crc kubenswrapper[4828]: I1129 07:38:41.627707 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2382ea61-cbab-408e-bfaa-a9d61897fec0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2382ea61-cbab-408e-bfaa-a9d61897fec0" (UID: "2382ea61-cbab-408e-bfaa-a9d61897fec0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:38:41 crc kubenswrapper[4828]: I1129 07:38:41.661447 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2382ea61-cbab-408e-bfaa-a9d61897fec0-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:41 crc kubenswrapper[4828]: I1129 07:38:41.661494 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2382ea61-cbab-408e-bfaa-a9d61897fec0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:41 crc kubenswrapper[4828]: I1129 07:38:41.661510 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xltlg\" (UniqueName: \"kubernetes.io/projected/2382ea61-cbab-408e-bfaa-a9d61897fec0-kube-api-access-xltlg\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.011041 4828 generic.go:334] "Generic (PLEG): container finished" podID="2382ea61-cbab-408e-bfaa-a9d61897fec0" 
containerID="f38eb8b87bda1b1d9d79edefbd52a8f221ce9dd8c8fd2bf2b3cb0a4ce088f233" exitCode=0 Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.011089 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xltvz" Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.011125 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xltvz" event={"ID":"2382ea61-cbab-408e-bfaa-a9d61897fec0","Type":"ContainerDied","Data":"f38eb8b87bda1b1d9d79edefbd52a8f221ce9dd8c8fd2bf2b3cb0a4ce088f233"} Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.011560 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xltvz" event={"ID":"2382ea61-cbab-408e-bfaa-a9d61897fec0","Type":"ContainerDied","Data":"709e9a6268fde63ce11fe9f39e6feaeb433152bcbecb2458ecb33e0ee56e6e30"} Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.011609 4828 scope.go:117] "RemoveContainer" containerID="f38eb8b87bda1b1d9d79edefbd52a8f221ce9dd8c8fd2bf2b3cb0a4ce088f233" Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.052733 4828 scope.go:117] "RemoveContainer" containerID="63a0e46ed6275b3a9e04df61e6be69e622bf1501f12b5ec533185442dfa0128e" Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.063173 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xltvz"] Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.079169 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xltvz"] Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.092462 4828 scope.go:117] "RemoveContainer" containerID="15c4b1d16da27c2b51e012ee7848c209a9f1f4e73988a05266e4d7c3a887062b" Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.133777 4828 scope.go:117] "RemoveContainer" containerID="f38eb8b87bda1b1d9d79edefbd52a8f221ce9dd8c8fd2bf2b3cb0a4ce088f233" Nov 29 
07:38:42 crc kubenswrapper[4828]: E1129 07:38:42.134447 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38eb8b87bda1b1d9d79edefbd52a8f221ce9dd8c8fd2bf2b3cb0a4ce088f233\": container with ID starting with f38eb8b87bda1b1d9d79edefbd52a8f221ce9dd8c8fd2bf2b3cb0a4ce088f233 not found: ID does not exist" containerID="f38eb8b87bda1b1d9d79edefbd52a8f221ce9dd8c8fd2bf2b3cb0a4ce088f233" Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.134509 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38eb8b87bda1b1d9d79edefbd52a8f221ce9dd8c8fd2bf2b3cb0a4ce088f233"} err="failed to get container status \"f38eb8b87bda1b1d9d79edefbd52a8f221ce9dd8c8fd2bf2b3cb0a4ce088f233\": rpc error: code = NotFound desc = could not find container \"f38eb8b87bda1b1d9d79edefbd52a8f221ce9dd8c8fd2bf2b3cb0a4ce088f233\": container with ID starting with f38eb8b87bda1b1d9d79edefbd52a8f221ce9dd8c8fd2bf2b3cb0a4ce088f233 not found: ID does not exist" Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.134540 4828 scope.go:117] "RemoveContainer" containerID="63a0e46ed6275b3a9e04df61e6be69e622bf1501f12b5ec533185442dfa0128e" Nov 29 07:38:42 crc kubenswrapper[4828]: E1129 07:38:42.135167 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63a0e46ed6275b3a9e04df61e6be69e622bf1501f12b5ec533185442dfa0128e\": container with ID starting with 63a0e46ed6275b3a9e04df61e6be69e622bf1501f12b5ec533185442dfa0128e not found: ID does not exist" containerID="63a0e46ed6275b3a9e04df61e6be69e622bf1501f12b5ec533185442dfa0128e" Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.135232 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63a0e46ed6275b3a9e04df61e6be69e622bf1501f12b5ec533185442dfa0128e"} err="failed to get container status 
\"63a0e46ed6275b3a9e04df61e6be69e622bf1501f12b5ec533185442dfa0128e\": rpc error: code = NotFound desc = could not find container \"63a0e46ed6275b3a9e04df61e6be69e622bf1501f12b5ec533185442dfa0128e\": container with ID starting with 63a0e46ed6275b3a9e04df61e6be69e622bf1501f12b5ec533185442dfa0128e not found: ID does not exist" Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.135281 4828 scope.go:117] "RemoveContainer" containerID="15c4b1d16da27c2b51e012ee7848c209a9f1f4e73988a05266e4d7c3a887062b" Nov 29 07:38:42 crc kubenswrapper[4828]: E1129 07:38:42.135719 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15c4b1d16da27c2b51e012ee7848c209a9f1f4e73988a05266e4d7c3a887062b\": container with ID starting with 15c4b1d16da27c2b51e012ee7848c209a9f1f4e73988a05266e4d7c3a887062b not found: ID does not exist" containerID="15c4b1d16da27c2b51e012ee7848c209a9f1f4e73988a05266e4d7c3a887062b" Nov 29 07:38:42 crc kubenswrapper[4828]: I1129 07:38:42.135756 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15c4b1d16da27c2b51e012ee7848c209a9f1f4e73988a05266e4d7c3a887062b"} err="failed to get container status \"15c4b1d16da27c2b51e012ee7848c209a9f1f4e73988a05266e4d7c3a887062b\": rpc error: code = NotFound desc = could not find container \"15c4b1d16da27c2b51e012ee7848c209a9f1f4e73988a05266e4d7c3a887062b\": container with ID starting with 15c4b1d16da27c2b51e012ee7848c209a9f1f4e73988a05266e4d7c3a887062b not found: ID does not exist" Nov 29 07:38:43 crc kubenswrapper[4828]: I1129 07:38:43.422433 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2382ea61-cbab-408e-bfaa-a9d61897fec0" path="/var/lib/kubelet/pods/2382ea61-cbab-408e-bfaa-a9d61897fec0/volumes" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.048759 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qhbkl"] Nov 29 07:39:08 
crc kubenswrapper[4828]: E1129 07:39:08.050004 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2382ea61-cbab-408e-bfaa-a9d61897fec0" containerName="extract-utilities" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.050037 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="2382ea61-cbab-408e-bfaa-a9d61897fec0" containerName="extract-utilities" Nov 29 07:39:08 crc kubenswrapper[4828]: E1129 07:39:08.050081 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2382ea61-cbab-408e-bfaa-a9d61897fec0" containerName="extract-content" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.050090 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="2382ea61-cbab-408e-bfaa-a9d61897fec0" containerName="extract-content" Nov 29 07:39:08 crc kubenswrapper[4828]: E1129 07:39:08.050110 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2382ea61-cbab-408e-bfaa-a9d61897fec0" containerName="registry-server" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.050119 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="2382ea61-cbab-408e-bfaa-a9d61897fec0" containerName="registry-server" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.050419 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="2382ea61-cbab-408e-bfaa-a9d61897fec0" containerName="registry-server" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.052215 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.064812 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhbkl"] Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.246619 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-utilities\") pod \"redhat-marketplace-qhbkl\" (UID: \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\") " pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.246815 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-catalog-content\") pod \"redhat-marketplace-qhbkl\" (UID: \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\") " pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.246859 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shd8h\" (UniqueName: \"kubernetes.io/projected/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-kube-api-access-shd8h\") pod \"redhat-marketplace-qhbkl\" (UID: \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\") " pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.348712 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-utilities\") pod \"redhat-marketplace-qhbkl\" (UID: \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\") " pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.348828 4828 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-catalog-content\") pod \"redhat-marketplace-qhbkl\" (UID: \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\") " pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.348865 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shd8h\" (UniqueName: \"kubernetes.io/projected/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-kube-api-access-shd8h\") pod \"redhat-marketplace-qhbkl\" (UID: \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\") " pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.349302 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-utilities\") pod \"redhat-marketplace-qhbkl\" (UID: \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\") " pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.349383 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-catalog-content\") pod \"redhat-marketplace-qhbkl\" (UID: \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\") " pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.369348 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shd8h\" (UniqueName: \"kubernetes.io/projected/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-kube-api-access-shd8h\") pod \"redhat-marketplace-qhbkl\" (UID: \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\") " pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.379222 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:08 crc kubenswrapper[4828]: I1129 07:39:08.868993 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhbkl"] Nov 29 07:39:09 crc kubenswrapper[4828]: I1129 07:39:09.275199 4828 generic.go:334] "Generic (PLEG): container finished" podID="a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" containerID="e8625ce6bae5719fa27c569b88f91a91651151321ee18b2dd9b7df30df2c88f7" exitCode=0 Nov 29 07:39:09 crc kubenswrapper[4828]: I1129 07:39:09.275495 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhbkl" event={"ID":"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8","Type":"ContainerDied","Data":"e8625ce6bae5719fa27c569b88f91a91651151321ee18b2dd9b7df30df2c88f7"} Nov 29 07:39:09 crc kubenswrapper[4828]: I1129 07:39:09.275581 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhbkl" event={"ID":"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8","Type":"ContainerStarted","Data":"04c8cc5457c6e5e4accafe1a304725843d2b31b3c8876046b8a5ad51b6ad56d6"} Nov 29 07:39:10 crc kubenswrapper[4828]: I1129 07:39:10.285777 4828 generic.go:334] "Generic (PLEG): container finished" podID="a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" containerID="3168813a4199097401c0cb671b1d6d32107f0e21f00fd0cc38a6139b7848d647" exitCode=0 Nov 29 07:39:10 crc kubenswrapper[4828]: I1129 07:39:10.285857 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhbkl" event={"ID":"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8","Type":"ContainerDied","Data":"3168813a4199097401c0cb671b1d6d32107f0e21f00fd0cc38a6139b7848d647"} Nov 29 07:39:11 crc kubenswrapper[4828]: I1129 07:39:11.296978 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhbkl" 
event={"ID":"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8","Type":"ContainerStarted","Data":"a370b35c417ec6345c2e0591f5282809dae966797f82b47359a4a6f0bd702f37"} Nov 29 07:39:11 crc kubenswrapper[4828]: I1129 07:39:11.325741 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qhbkl" podStartSLOduration=1.873033699 podStartE2EDuration="3.325703517s" podCreationTimestamp="2025-11-29 07:39:08 +0000 UTC" firstStartedPulling="2025-11-29 07:39:09.277369729 +0000 UTC m=+2288.899445787" lastFinishedPulling="2025-11-29 07:39:10.730039547 +0000 UTC m=+2290.352115605" observedRunningTime="2025-11-29 07:39:11.318100033 +0000 UTC m=+2290.940176101" watchObservedRunningTime="2025-11-29 07:39:11.325703517 +0000 UTC m=+2290.947779575" Nov 29 07:39:11 crc kubenswrapper[4828]: I1129 07:39:11.487515 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:39:11 crc kubenswrapper[4828]: I1129 07:39:11.487593 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:39:11 crc kubenswrapper[4828]: I1129 07:39:11.487656 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:39:11 crc kubenswrapper[4828]: I1129 07:39:11.488516 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:39:11 crc kubenswrapper[4828]: I1129 07:39:11.488578 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" gracePeriod=600 Nov 29 07:39:11 crc kubenswrapper[4828]: E1129 07:39:11.672288 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:39:12 crc kubenswrapper[4828]: I1129 07:39:12.309950 4828 generic.go:334] "Generic (PLEG): container finished" podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" exitCode=0 Nov 29 07:39:12 crc kubenswrapper[4828]: I1129 07:39:12.310024 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130"} Nov 29 07:39:12 crc kubenswrapper[4828]: I1129 07:39:12.310098 4828 scope.go:117] "RemoveContainer" containerID="3e4b03aa844a4a6319ecb0b1d8c8adf54bb46f868c3fcd41d7078405776727be" Nov 29 07:39:12 crc kubenswrapper[4828]: I1129 07:39:12.310706 4828 
scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:39:12 crc kubenswrapper[4828]: E1129 07:39:12.311004 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:39:17 crc kubenswrapper[4828]: E1129 07:39:17.052814 4828 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffcc2240_c156_4d2b_9500_1bf8015e5733.slice/crio-d6ab56090c7f123e06122664cba51e252812126c348775a07ffde381a45d5eab.scope\": RecentStats: unable to find data in memory cache]" Nov 29 07:39:17 crc kubenswrapper[4828]: I1129 07:39:17.358050 4828 generic.go:334] "Generic (PLEG): container finished" podID="ffcc2240-c156-4d2b-9500-1bf8015e5733" containerID="d6ab56090c7f123e06122664cba51e252812126c348775a07ffde381a45d5eab" exitCode=0 Nov 29 07:39:17 crc kubenswrapper[4828]: I1129 07:39:17.358143 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" event={"ID":"ffcc2240-c156-4d2b-9500-1bf8015e5733","Type":"ContainerDied","Data":"d6ab56090c7f123e06122664cba51e252812126c348775a07ffde381a45d5eab"} Nov 29 07:39:17 crc kubenswrapper[4828]: I1129 07:39:17.807487 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fcmpv"] Nov 29 07:39:17 crc kubenswrapper[4828]: I1129 07:39:17.810315 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fcmpv" Nov 29 07:39:17 crc kubenswrapper[4828]: I1129 07:39:17.823130 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fcmpv"] Nov 29 07:39:17 crc kubenswrapper[4828]: I1129 07:39:17.950496 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c90a227-696d-4787-9eb9-ff2e61a4888c-utilities\") pod \"community-operators-fcmpv\" (UID: \"5c90a227-696d-4787-9eb9-ff2e61a4888c\") " pod="openshift-marketplace/community-operators-fcmpv" Nov 29 07:39:17 crc kubenswrapper[4828]: I1129 07:39:17.950590 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c90a227-696d-4787-9eb9-ff2e61a4888c-catalog-content\") pod \"community-operators-fcmpv\" (UID: \"5c90a227-696d-4787-9eb9-ff2e61a4888c\") " pod="openshift-marketplace/community-operators-fcmpv" Nov 29 07:39:17 crc kubenswrapper[4828]: I1129 07:39:17.950665 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb2mz\" (UniqueName: \"kubernetes.io/projected/5c90a227-696d-4787-9eb9-ff2e61a4888c-kube-api-access-cb2mz\") pod \"community-operators-fcmpv\" (UID: \"5c90a227-696d-4787-9eb9-ff2e61a4888c\") " pod="openshift-marketplace/community-operators-fcmpv" Nov 29 07:39:18 crc kubenswrapper[4828]: I1129 07:39:18.051948 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb2mz\" (UniqueName: \"kubernetes.io/projected/5c90a227-696d-4787-9eb9-ff2e61a4888c-kube-api-access-cb2mz\") pod \"community-operators-fcmpv\" (UID: \"5c90a227-696d-4787-9eb9-ff2e61a4888c\") " pod="openshift-marketplace/community-operators-fcmpv" Nov 29 07:39:18 crc kubenswrapper[4828]: I1129 07:39:18.052085 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c90a227-696d-4787-9eb9-ff2e61a4888c-utilities\") pod \"community-operators-fcmpv\" (UID: \"5c90a227-696d-4787-9eb9-ff2e61a4888c\") " pod="openshift-marketplace/community-operators-fcmpv" Nov 29 07:39:18 crc kubenswrapper[4828]: I1129 07:39:18.052139 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c90a227-696d-4787-9eb9-ff2e61a4888c-catalog-content\") pod \"community-operators-fcmpv\" (UID: \"5c90a227-696d-4787-9eb9-ff2e61a4888c\") " pod="openshift-marketplace/community-operators-fcmpv" Nov 29 07:39:18 crc kubenswrapper[4828]: I1129 07:39:18.052727 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c90a227-696d-4787-9eb9-ff2e61a4888c-catalog-content\") pod \"community-operators-fcmpv\" (UID: \"5c90a227-696d-4787-9eb9-ff2e61a4888c\") " pod="openshift-marketplace/community-operators-fcmpv" Nov 29 07:39:18 crc kubenswrapper[4828]: I1129 07:39:18.052871 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c90a227-696d-4787-9eb9-ff2e61a4888c-utilities\") pod \"community-operators-fcmpv\" (UID: \"5c90a227-696d-4787-9eb9-ff2e61a4888c\") " pod="openshift-marketplace/community-operators-fcmpv" Nov 29 07:39:18 crc kubenswrapper[4828]: I1129 07:39:18.074736 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb2mz\" (UniqueName: \"kubernetes.io/projected/5c90a227-696d-4787-9eb9-ff2e61a4888c-kube-api-access-cb2mz\") pod \"community-operators-fcmpv\" (UID: \"5c90a227-696d-4787-9eb9-ff2e61a4888c\") " pod="openshift-marketplace/community-operators-fcmpv" Nov 29 07:39:18 crc kubenswrapper[4828]: I1129 07:39:18.139026 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fcmpv" Nov 29 07:39:18 crc kubenswrapper[4828]: I1129 07:39:18.380179 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:18 crc kubenswrapper[4828]: I1129 07:39:18.380737 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:18 crc kubenswrapper[4828]: I1129 07:39:18.450386 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:18 crc kubenswrapper[4828]: I1129 07:39:18.722457 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fcmpv"] Nov 29 07:39:18 crc kubenswrapper[4828]: I1129 07:39:18.904966 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.073732 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwcjf\" (UniqueName: \"kubernetes.io/projected/ffcc2240-c156-4d2b-9500-1bf8015e5733-kube-api-access-cwcjf\") pod \"ffcc2240-c156-4d2b-9500-1bf8015e5733\" (UID: \"ffcc2240-c156-4d2b-9500-1bf8015e5733\") " Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.073831 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffcc2240-c156-4d2b-9500-1bf8015e5733-inventory\") pod \"ffcc2240-c156-4d2b-9500-1bf8015e5733\" (UID: \"ffcc2240-c156-4d2b-9500-1bf8015e5733\") " Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.073954 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffcc2240-c156-4d2b-9500-1bf8015e5733-ssh-key\") pod 
\"ffcc2240-c156-4d2b-9500-1bf8015e5733\" (UID: \"ffcc2240-c156-4d2b-9500-1bf8015e5733\") " Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.081147 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffcc2240-c156-4d2b-9500-1bf8015e5733-kube-api-access-cwcjf" (OuterVolumeSpecName: "kube-api-access-cwcjf") pod "ffcc2240-c156-4d2b-9500-1bf8015e5733" (UID: "ffcc2240-c156-4d2b-9500-1bf8015e5733"). InnerVolumeSpecName "kube-api-access-cwcjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.108759 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcc2240-c156-4d2b-9500-1bf8015e5733-inventory" (OuterVolumeSpecName: "inventory") pod "ffcc2240-c156-4d2b-9500-1bf8015e5733" (UID: "ffcc2240-c156-4d2b-9500-1bf8015e5733"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.112660 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcc2240-c156-4d2b-9500-1bf8015e5733-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ffcc2240-c156-4d2b-9500-1bf8015e5733" (UID: "ffcc2240-c156-4d2b-9500-1bf8015e5733"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.177329 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffcc2240-c156-4d2b-9500-1bf8015e5733-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.177391 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwcjf\" (UniqueName: \"kubernetes.io/projected/ffcc2240-c156-4d2b-9500-1bf8015e5733-kube-api-access-cwcjf\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.177407 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffcc2240-c156-4d2b-9500-1bf8015e5733-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.381838 4828 generic.go:334] "Generic (PLEG): container finished" podID="5c90a227-696d-4787-9eb9-ff2e61a4888c" containerID="985c33a7d01836b2152b1e3d735506f5075cac25e0e0f3d332b6b6f8a8a5e4ee" exitCode=0 Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.381924 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcmpv" event={"ID":"5c90a227-696d-4787-9eb9-ff2e61a4888c","Type":"ContainerDied","Data":"985c33a7d01836b2152b1e3d735506f5075cac25e0e0f3d332b6b6f8a8a5e4ee"} Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.381958 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcmpv" event={"ID":"5c90a227-696d-4787-9eb9-ff2e61a4888c","Type":"ContainerStarted","Data":"588889a724c7fe5c1d69fe79419785472a2711a582fabccee5259e8650d510d1"} Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.384147 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.384201 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bj26s" event={"ID":"ffcc2240-c156-4d2b-9500-1bf8015e5733","Type":"ContainerDied","Data":"2e3e5b0d74369636a2fe4fd63854721db9f39459cea0f751cd5d28f8a80d1548"} Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.384231 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e3e5b0d74369636a2fe4fd63854721db9f39459cea0f751cd5d28f8a80d1548" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.451427 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.495226 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm"] Nov 29 07:39:19 crc kubenswrapper[4828]: E1129 07:39:19.495748 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcc2240-c156-4d2b-9500-1bf8015e5733" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.495776 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcc2240-c156-4d2b-9500-1bf8015e5733" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.496048 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcc2240-c156-4d2b-9500-1bf8015e5733" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.497596 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.502760 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.502836 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.502996 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.507308 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm"] Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.511732 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.685332 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pctqn\" (UniqueName: \"kubernetes.io/projected/55539b0e-2552-4e7c-89f0-e67ae0f38aba-kube-api-access-pctqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm\" (UID: \"55539b0e-2552-4e7c-89f0-e67ae0f38aba\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.685392 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55539b0e-2552-4e7c-89f0-e67ae0f38aba-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm\" (UID: \"55539b0e-2552-4e7c-89f0-e67ae0f38aba\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.685470 4828 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55539b0e-2552-4e7c-89f0-e67ae0f38aba-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm\" (UID: \"55539b0e-2552-4e7c-89f0-e67ae0f38aba\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.787026 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pctqn\" (UniqueName: \"kubernetes.io/projected/55539b0e-2552-4e7c-89f0-e67ae0f38aba-kube-api-access-pctqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm\" (UID: \"55539b0e-2552-4e7c-89f0-e67ae0f38aba\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.787095 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55539b0e-2552-4e7c-89f0-e67ae0f38aba-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm\" (UID: \"55539b0e-2552-4e7c-89f0-e67ae0f38aba\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.787153 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55539b0e-2552-4e7c-89f0-e67ae0f38aba-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm\" (UID: \"55539b0e-2552-4e7c-89f0-e67ae0f38aba\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.792882 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55539b0e-2552-4e7c-89f0-e67ae0f38aba-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm\" (UID: 
\"55539b0e-2552-4e7c-89f0-e67ae0f38aba\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.794561 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55539b0e-2552-4e7c-89f0-e67ae0f38aba-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm\" (UID: \"55539b0e-2552-4e7c-89f0-e67ae0f38aba\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.805404 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pctqn\" (UniqueName: \"kubernetes.io/projected/55539b0e-2552-4e7c-89f0-e67ae0f38aba-kube-api-access-pctqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm\" (UID: \"55539b0e-2552-4e7c-89f0-e67ae0f38aba\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" Nov 29 07:39:19 crc kubenswrapper[4828]: I1129 07:39:19.817360 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" Nov 29 07:39:20 crc kubenswrapper[4828]: I1129 07:39:20.358451 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm"] Nov 29 07:39:20 crc kubenswrapper[4828]: W1129 07:39:20.359961 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55539b0e_2552_4e7c_89f0_e67ae0f38aba.slice/crio-2557400dfb4b72cdf3b776fe318fa36df33531af2bc6baf4d474f6a74059cb59 WatchSource:0}: Error finding container 2557400dfb4b72cdf3b776fe318fa36df33531af2bc6baf4d474f6a74059cb59: Status 404 returned error can't find the container with id 2557400dfb4b72cdf3b776fe318fa36df33531af2bc6baf4d474f6a74059cb59 Nov 29 07:39:20 crc kubenswrapper[4828]: I1129 07:39:20.399300 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" event={"ID":"55539b0e-2552-4e7c-89f0-e67ae0f38aba","Type":"ContainerStarted","Data":"2557400dfb4b72cdf3b776fe318fa36df33531af2bc6baf4d474f6a74059cb59"} Nov 29 07:39:20 crc kubenswrapper[4828]: I1129 07:39:20.808176 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhbkl"] Nov 29 07:39:21 crc kubenswrapper[4828]: I1129 07:39:21.407728 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qhbkl" podUID="a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" containerName="registry-server" containerID="cri-o://a370b35c417ec6345c2e0591f5282809dae966797f82b47359a4a6f0bd702f37" gracePeriod=2 Nov 29 07:39:22 crc kubenswrapper[4828]: I1129 07:39:22.420483 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcmpv" 
event={"ID":"5c90a227-696d-4787-9eb9-ff2e61a4888c","Type":"ContainerStarted","Data":"fcbd2e70ae4cd8932f65e58f77496d13315b7b22d30313ce7c90c7bb8e71072b"} Nov 29 07:39:22 crc kubenswrapper[4828]: I1129 07:39:22.423652 4828 generic.go:334] "Generic (PLEG): container finished" podID="a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" containerID="a370b35c417ec6345c2e0591f5282809dae966797f82b47359a4a6f0bd702f37" exitCode=0 Nov 29 07:39:22 crc kubenswrapper[4828]: I1129 07:39:22.423700 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhbkl" event={"ID":"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8","Type":"ContainerDied","Data":"a370b35c417ec6345c2e0591f5282809dae966797f82b47359a4a6f0bd702f37"} Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.056063 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhbkl" Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.157703 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-utilities\") pod \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\" (UID: \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\") " Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.157917 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shd8h\" (UniqueName: \"kubernetes.io/projected/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-kube-api-access-shd8h\") pod \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\" (UID: \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\") " Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.157990 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-catalog-content\") pod \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\" (UID: \"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8\") " Nov 
29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.160089 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-utilities" (OuterVolumeSpecName: "utilities") pod "a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" (UID: "a1da7301-4ede-4ba5-ae08-32fbdc1a52b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.173473 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-kube-api-access-shd8h" (OuterVolumeSpecName: "kube-api-access-shd8h") pod "a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" (UID: "a1da7301-4ede-4ba5-ae08-32fbdc1a52b8"). InnerVolumeSpecName "kube-api-access-shd8h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.185852 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" (UID: "a1da7301-4ede-4ba5-ae08-32fbdc1a52b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.260109 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shd8h\" (UniqueName: \"kubernetes.io/projected/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-kube-api-access-shd8h\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.260154 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.260169 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.412257 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130"
Nov 29 07:39:23 crc kubenswrapper[4828]: E1129 07:39:23.412682 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.433854 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhbkl" event={"ID":"a1da7301-4ede-4ba5-ae08-32fbdc1a52b8","Type":"ContainerDied","Data":"04c8cc5457c6e5e4accafe1a304725843d2b31b3c8876046b8a5ad51b6ad56d6"}
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.433906 4828 scope.go:117] "RemoveContainer" containerID="a370b35c417ec6345c2e0591f5282809dae966797f82b47359a4a6f0bd702f37"
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.434033 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhbkl"
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.439158 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" event={"ID":"55539b0e-2552-4e7c-89f0-e67ae0f38aba","Type":"ContainerStarted","Data":"e5840c83acea6812547ad5b862fb39b87a4b97a3f64d8c59ee52b674ec2d3d70"}
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.459573 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" podStartSLOduration=2.904857357 podStartE2EDuration="4.459551388s" podCreationTimestamp="2025-11-29 07:39:19 +0000 UTC" firstStartedPulling="2025-11-29 07:39:20.372382542 +0000 UTC m=+2299.994458600" lastFinishedPulling="2025-11-29 07:39:21.927076573 +0000 UTC m=+2301.549152631" observedRunningTime="2025-11-29 07:39:23.457339802 +0000 UTC m=+2303.079415860" watchObservedRunningTime="2025-11-29 07:39:23.459551388 +0000 UTC m=+2303.081627446"
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.465182 4828 scope.go:117] "RemoveContainer" containerID="3168813a4199097401c0cb671b1d6d32107f0e21f00fd0cc38a6139b7848d647"
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.481513 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhbkl"]
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.490456 4828 scope.go:117] "RemoveContainer" containerID="e8625ce6bae5719fa27c569b88f91a91651151321ee18b2dd9b7df30df2c88f7"
Nov 29 07:39:23 crc kubenswrapper[4828]: I1129 07:39:23.491120 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhbkl"]
Nov 29 07:39:24 crc kubenswrapper[4828]: I1129 07:39:24.453867 4828 generic.go:334] "Generic (PLEG): container finished" podID="5c90a227-696d-4787-9eb9-ff2e61a4888c" containerID="fcbd2e70ae4cd8932f65e58f77496d13315b7b22d30313ce7c90c7bb8e71072b" exitCode=0
Nov 29 07:39:24 crc kubenswrapper[4828]: I1129 07:39:24.453998 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcmpv" event={"ID":"5c90a227-696d-4787-9eb9-ff2e61a4888c","Type":"ContainerDied","Data":"fcbd2e70ae4cd8932f65e58f77496d13315b7b22d30313ce7c90c7bb8e71072b"}
Nov 29 07:39:25 crc kubenswrapper[4828]: I1129 07:39:25.427581 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" path="/var/lib/kubelet/pods/a1da7301-4ede-4ba5-ae08-32fbdc1a52b8/volumes"
Nov 29 07:39:25 crc kubenswrapper[4828]: I1129 07:39:25.469087 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcmpv" event={"ID":"5c90a227-696d-4787-9eb9-ff2e61a4888c","Type":"ContainerStarted","Data":"349c48fecccbc5914fb47e10453959b4f1070b5c8ee6cee3f217d6d17018ba8c"}
Nov 29 07:39:25 crc kubenswrapper[4828]: I1129 07:39:25.492090 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fcmpv" podStartSLOduration=2.954787826 podStartE2EDuration="8.492064802s" podCreationTimestamp="2025-11-29 07:39:17 +0000 UTC" firstStartedPulling="2025-11-29 07:39:19.386847714 +0000 UTC m=+2299.008923772" lastFinishedPulling="2025-11-29 07:39:24.92412469 +0000 UTC m=+2304.546200748" observedRunningTime="2025-11-29 07:39:25.489964408 +0000 UTC m=+2305.112040466" watchObservedRunningTime="2025-11-29 07:39:25.492064802 +0000 UTC m=+2305.114140860"
Nov 29 07:39:28 crc kubenswrapper[4828]: I1129 07:39:28.148292 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fcmpv"
Nov 29 07:39:28 crc kubenswrapper[4828]: I1129 07:39:28.149037 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fcmpv"
Nov 29 07:39:28 crc kubenswrapper[4828]: I1129 07:39:28.214081 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fcmpv"
Nov 29 07:39:35 crc kubenswrapper[4828]: I1129 07:39:35.413930 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130"
Nov 29 07:39:35 crc kubenswrapper[4828]: E1129 07:39:35.414851 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:39:38 crc kubenswrapper[4828]: I1129 07:39:38.190075 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fcmpv"
Nov 29 07:39:38 crc kubenswrapper[4828]: I1129 07:39:38.239223 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fcmpv"]
Nov 29 07:39:38 crc kubenswrapper[4828]: I1129 07:39:38.589749 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fcmpv" podUID="5c90a227-696d-4787-9eb9-ff2e61a4888c" containerName="registry-server" containerID="cri-o://349c48fecccbc5914fb47e10453959b4f1070b5c8ee6cee3f217d6d17018ba8c" gracePeriod=2
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.568065 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fcmpv"
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.590264 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c90a227-696d-4787-9eb9-ff2e61a4888c-utilities\") pod \"5c90a227-696d-4787-9eb9-ff2e61a4888c\" (UID: \"5c90a227-696d-4787-9eb9-ff2e61a4888c\") "
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.590394 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb2mz\" (UniqueName: \"kubernetes.io/projected/5c90a227-696d-4787-9eb9-ff2e61a4888c-kube-api-access-cb2mz\") pod \"5c90a227-696d-4787-9eb9-ff2e61a4888c\" (UID: \"5c90a227-696d-4787-9eb9-ff2e61a4888c\") "
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.590427 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c90a227-696d-4787-9eb9-ff2e61a4888c-catalog-content\") pod \"5c90a227-696d-4787-9eb9-ff2e61a4888c\" (UID: \"5c90a227-696d-4787-9eb9-ff2e61a4888c\") "
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.591812 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c90a227-696d-4787-9eb9-ff2e61a4888c-utilities" (OuterVolumeSpecName: "utilities") pod "5c90a227-696d-4787-9eb9-ff2e61a4888c" (UID: "5c90a227-696d-4787-9eb9-ff2e61a4888c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.599719 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c90a227-696d-4787-9eb9-ff2e61a4888c-kube-api-access-cb2mz" (OuterVolumeSpecName: "kube-api-access-cb2mz") pod "5c90a227-696d-4787-9eb9-ff2e61a4888c" (UID: "5c90a227-696d-4787-9eb9-ff2e61a4888c"). InnerVolumeSpecName "kube-api-access-cb2mz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.615443 4828 generic.go:334] "Generic (PLEG): container finished" podID="5c90a227-696d-4787-9eb9-ff2e61a4888c" containerID="349c48fecccbc5914fb47e10453959b4f1070b5c8ee6cee3f217d6d17018ba8c" exitCode=0
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.615832 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcmpv" event={"ID":"5c90a227-696d-4787-9eb9-ff2e61a4888c","Type":"ContainerDied","Data":"349c48fecccbc5914fb47e10453959b4f1070b5c8ee6cee3f217d6d17018ba8c"}
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.616051 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcmpv" event={"ID":"5c90a227-696d-4787-9eb9-ff2e61a4888c","Type":"ContainerDied","Data":"588889a724c7fe5c1d69fe79419785472a2711a582fabccee5259e8650d510d1"}
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.616232 4828 scope.go:117] "RemoveContainer" containerID="349c48fecccbc5914fb47e10453959b4f1070b5c8ee6cee3f217d6d17018ba8c"
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.616683 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fcmpv"
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.659970 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c90a227-696d-4787-9eb9-ff2e61a4888c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c90a227-696d-4787-9eb9-ff2e61a4888c" (UID: "5c90a227-696d-4787-9eb9-ff2e61a4888c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.667865 4828 scope.go:117] "RemoveContainer" containerID="fcbd2e70ae4cd8932f65e58f77496d13315b7b22d30313ce7c90c7bb8e71072b"
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.694136 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c90a227-696d-4787-9eb9-ff2e61a4888c-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.694183 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb2mz\" (UniqueName: \"kubernetes.io/projected/5c90a227-696d-4787-9eb9-ff2e61a4888c-kube-api-access-cb2mz\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.694200 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c90a227-696d-4787-9eb9-ff2e61a4888c-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.706785 4828 scope.go:117] "RemoveContainer" containerID="985c33a7d01836b2152b1e3d735506f5075cac25e0e0f3d332b6b6f8a8a5e4ee"
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.755043 4828 scope.go:117] "RemoveContainer" containerID="349c48fecccbc5914fb47e10453959b4f1070b5c8ee6cee3f217d6d17018ba8c"
Nov 29 07:39:39 crc kubenswrapper[4828]: E1129 07:39:39.755685 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"349c48fecccbc5914fb47e10453959b4f1070b5c8ee6cee3f217d6d17018ba8c\": container with ID starting with 349c48fecccbc5914fb47e10453959b4f1070b5c8ee6cee3f217d6d17018ba8c not found: ID does not exist" containerID="349c48fecccbc5914fb47e10453959b4f1070b5c8ee6cee3f217d6d17018ba8c"
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.755733 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"349c48fecccbc5914fb47e10453959b4f1070b5c8ee6cee3f217d6d17018ba8c"} err="failed to get container status \"349c48fecccbc5914fb47e10453959b4f1070b5c8ee6cee3f217d6d17018ba8c\": rpc error: code = NotFound desc = could not find container \"349c48fecccbc5914fb47e10453959b4f1070b5c8ee6cee3f217d6d17018ba8c\": container with ID starting with 349c48fecccbc5914fb47e10453959b4f1070b5c8ee6cee3f217d6d17018ba8c not found: ID does not exist"
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.755765 4828 scope.go:117] "RemoveContainer" containerID="fcbd2e70ae4cd8932f65e58f77496d13315b7b22d30313ce7c90c7bb8e71072b"
Nov 29 07:39:39 crc kubenswrapper[4828]: E1129 07:39:39.756152 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcbd2e70ae4cd8932f65e58f77496d13315b7b22d30313ce7c90c7bb8e71072b\": container with ID starting with fcbd2e70ae4cd8932f65e58f77496d13315b7b22d30313ce7c90c7bb8e71072b not found: ID does not exist" containerID="fcbd2e70ae4cd8932f65e58f77496d13315b7b22d30313ce7c90c7bb8e71072b"
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.756195 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcbd2e70ae4cd8932f65e58f77496d13315b7b22d30313ce7c90c7bb8e71072b"} err="failed to get container status \"fcbd2e70ae4cd8932f65e58f77496d13315b7b22d30313ce7c90c7bb8e71072b\": rpc error: code = NotFound desc = could not find container \"fcbd2e70ae4cd8932f65e58f77496d13315b7b22d30313ce7c90c7bb8e71072b\": container with ID starting with fcbd2e70ae4cd8932f65e58f77496d13315b7b22d30313ce7c90c7bb8e71072b not found: ID does not exist"
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.756232 4828 scope.go:117] "RemoveContainer" containerID="985c33a7d01836b2152b1e3d735506f5075cac25e0e0f3d332b6b6f8a8a5e4ee"
Nov 29 07:39:39 crc kubenswrapper[4828]: E1129 07:39:39.756583 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985c33a7d01836b2152b1e3d735506f5075cac25e0e0f3d332b6b6f8a8a5e4ee\": container with ID starting with 985c33a7d01836b2152b1e3d735506f5075cac25e0e0f3d332b6b6f8a8a5e4ee not found: ID does not exist" containerID="985c33a7d01836b2152b1e3d735506f5075cac25e0e0f3d332b6b6f8a8a5e4ee"
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.756642 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985c33a7d01836b2152b1e3d735506f5075cac25e0e0f3d332b6b6f8a8a5e4ee"} err="failed to get container status \"985c33a7d01836b2152b1e3d735506f5075cac25e0e0f3d332b6b6f8a8a5e4ee\": rpc error: code = NotFound desc = could not find container \"985c33a7d01836b2152b1e3d735506f5075cac25e0e0f3d332b6b6f8a8a5e4ee\": container with ID starting with 985c33a7d01836b2152b1e3d735506f5075cac25e0e0f3d332b6b6f8a8a5e4ee not found: ID does not exist"
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.955036 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fcmpv"]
Nov 29 07:39:39 crc kubenswrapper[4828]: I1129 07:39:39.964236 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fcmpv"]
Nov 29 07:39:41 crc kubenswrapper[4828]: I1129 07:39:41.441015 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c90a227-696d-4787-9eb9-ff2e61a4888c" path="/var/lib/kubelet/pods/5c90a227-696d-4787-9eb9-ff2e61a4888c/volumes"
Nov 29 07:39:46 crc kubenswrapper[4828]: I1129 07:39:46.412933 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130"
Nov 29 07:39:46 crc kubenswrapper[4828]: E1129 07:39:46.414777 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:39:58 crc kubenswrapper[4828]: I1129 07:39:58.412399 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130"
Nov 29 07:39:58 crc kubenswrapper[4828]: E1129 07:39:58.413182 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:40:09 crc kubenswrapper[4828]: I1129 07:40:09.412214 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130"
Nov 29 07:40:09 crc kubenswrapper[4828]: E1129 07:40:09.413371 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:40:21 crc kubenswrapper[4828]: I1129 07:40:21.012311 4828 generic.go:334] "Generic (PLEG): container finished" podID="55539b0e-2552-4e7c-89f0-e67ae0f38aba" containerID="e5840c83acea6812547ad5b862fb39b87a4b97a3f64d8c59ee52b674ec2d3d70" exitCode=0
Nov 29 07:40:21 crc kubenswrapper[4828]: I1129 07:40:21.012404 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" event={"ID":"55539b0e-2552-4e7c-89f0-e67ae0f38aba","Type":"ContainerDied","Data":"e5840c83acea6812547ad5b862fb39b87a4b97a3f64d8c59ee52b674ec2d3d70"}
Nov 29 07:40:22 crc kubenswrapper[4828]: I1129 07:40:22.411474 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130"
Nov 29 07:40:22 crc kubenswrapper[4828]: E1129 07:40:22.411973 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 07:40:22 crc kubenswrapper[4828]: I1129 07:40:22.432900 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm"
Nov 29 07:40:22 crc kubenswrapper[4828]: I1129 07:40:22.539446 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55539b0e-2552-4e7c-89f0-e67ae0f38aba-ssh-key\") pod \"55539b0e-2552-4e7c-89f0-e67ae0f38aba\" (UID: \"55539b0e-2552-4e7c-89f0-e67ae0f38aba\") "
Nov 29 07:40:22 crc kubenswrapper[4828]: I1129 07:40:22.539568 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pctqn\" (UniqueName: \"kubernetes.io/projected/55539b0e-2552-4e7c-89f0-e67ae0f38aba-kube-api-access-pctqn\") pod \"55539b0e-2552-4e7c-89f0-e67ae0f38aba\" (UID: \"55539b0e-2552-4e7c-89f0-e67ae0f38aba\") "
Nov 29 07:40:22 crc kubenswrapper[4828]: I1129 07:40:22.539714 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55539b0e-2552-4e7c-89f0-e67ae0f38aba-inventory\") pod \"55539b0e-2552-4e7c-89f0-e67ae0f38aba\" (UID: \"55539b0e-2552-4e7c-89f0-e67ae0f38aba\") "
Nov 29 07:40:22 crc kubenswrapper[4828]: I1129 07:40:22.547757 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55539b0e-2552-4e7c-89f0-e67ae0f38aba-kube-api-access-pctqn" (OuterVolumeSpecName: "kube-api-access-pctqn") pod "55539b0e-2552-4e7c-89f0-e67ae0f38aba" (UID: "55539b0e-2552-4e7c-89f0-e67ae0f38aba"). InnerVolumeSpecName "kube-api-access-pctqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:40:22 crc kubenswrapper[4828]: I1129 07:40:22.576099 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55539b0e-2552-4e7c-89f0-e67ae0f38aba-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "55539b0e-2552-4e7c-89f0-e67ae0f38aba" (UID: "55539b0e-2552-4e7c-89f0-e67ae0f38aba"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:22 crc kubenswrapper[4828]: I1129 07:40:22.583009 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55539b0e-2552-4e7c-89f0-e67ae0f38aba-inventory" (OuterVolumeSpecName: "inventory") pod "55539b0e-2552-4e7c-89f0-e67ae0f38aba" (UID: "55539b0e-2552-4e7c-89f0-e67ae0f38aba"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:22 crc kubenswrapper[4828]: I1129 07:40:22.642785 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55539b0e-2552-4e7c-89f0-e67ae0f38aba-inventory\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:22 crc kubenswrapper[4828]: I1129 07:40:22.642835 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55539b0e-2552-4e7c-89f0-e67ae0f38aba-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:22 crc kubenswrapper[4828]: I1129 07:40:22.642848 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pctqn\" (UniqueName: \"kubernetes.io/projected/55539b0e-2552-4e7c-89f0-e67ae0f38aba-kube-api-access-pctqn\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.031429 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm" event={"ID":"55539b0e-2552-4e7c-89f0-e67ae0f38aba","Type":"ContainerDied","Data":"2557400dfb4b72cdf3b776fe318fa36df33531af2bc6baf4d474f6a74059cb59"}
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.031505 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2557400dfb4b72cdf3b776fe318fa36df33531af2bc6baf4d474f6a74059cb59"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.031569 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.128688 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qj8wj"]
Nov 29 07:40:23 crc kubenswrapper[4828]: E1129 07:40:23.129343 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55539b0e-2552-4e7c-89f0-e67ae0f38aba" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.129386 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="55539b0e-2552-4e7c-89f0-e67ae0f38aba" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:40:23 crc kubenswrapper[4828]: E1129 07:40:23.129408 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" containerName="extract-content"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.129416 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" containerName="extract-content"
Nov 29 07:40:23 crc kubenswrapper[4828]: E1129 07:40:23.129430 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" containerName="registry-server"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.129437 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" containerName="registry-server"
Nov 29 07:40:23 crc kubenswrapper[4828]: E1129 07:40:23.129467 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" containerName="extract-utilities"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.129475 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" containerName="extract-utilities"
Nov 29 07:40:23 crc kubenswrapper[4828]: E1129 07:40:23.129491 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c90a227-696d-4787-9eb9-ff2e61a4888c" containerName="extract-content"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.129499 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c90a227-696d-4787-9eb9-ff2e61a4888c" containerName="extract-content"
Nov 29 07:40:23 crc kubenswrapper[4828]: E1129 07:40:23.129518 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c90a227-696d-4787-9eb9-ff2e61a4888c" containerName="registry-server"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.129525 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c90a227-696d-4787-9eb9-ff2e61a4888c" containerName="registry-server"
Nov 29 07:40:23 crc kubenswrapper[4828]: E1129 07:40:23.129544 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c90a227-696d-4787-9eb9-ff2e61a4888c" containerName="extract-utilities"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.129552 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c90a227-696d-4787-9eb9-ff2e61a4888c" containerName="extract-utilities"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.129822 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="55539b0e-2552-4e7c-89f0-e67ae0f38aba" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.129846 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1da7301-4ede-4ba5-ae08-32fbdc1a52b8" containerName="registry-server"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.129867 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c90a227-696d-4787-9eb9-ff2e61a4888c" containerName="registry-server"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.131371 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.137833 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.137952 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.138083 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.138380 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.140421 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qj8wj"]
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.254078 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dadc365-25bc-43a9-9e8a-c17749832d20-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qj8wj\" (UID: \"5dadc365-25bc-43a9-9e8a-c17749832d20\") " pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.254138 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxkc6\" (UniqueName: \"kubernetes.io/projected/5dadc365-25bc-43a9-9e8a-c17749832d20-kube-api-access-bxkc6\") pod \"ssh-known-hosts-edpm-deployment-qj8wj\" (UID: \"5dadc365-25bc-43a9-9e8a-c17749832d20\") " pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.254725 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5dadc365-25bc-43a9-9e8a-c17749832d20-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qj8wj\" (UID: \"5dadc365-25bc-43a9-9e8a-c17749832d20\") " pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.355906 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5dadc365-25bc-43a9-9e8a-c17749832d20-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qj8wj\" (UID: \"5dadc365-25bc-43a9-9e8a-c17749832d20\") " pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.355979 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dadc365-25bc-43a9-9e8a-c17749832d20-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qj8wj\" (UID: \"5dadc365-25bc-43a9-9e8a-c17749832d20\") " pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.356008 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxkc6\" (UniqueName: \"kubernetes.io/projected/5dadc365-25bc-43a9-9e8a-c17749832d20-kube-api-access-bxkc6\") pod \"ssh-known-hosts-edpm-deployment-qj8wj\" (UID: \"5dadc365-25bc-43a9-9e8a-c17749832d20\") " pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.360578 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5dadc365-25bc-43a9-9e8a-c17749832d20-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qj8wj\" (UID: \"5dadc365-25bc-43a9-9e8a-c17749832d20\") " pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.364137 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dadc365-25bc-43a9-9e8a-c17749832d20-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qj8wj\" (UID: \"5dadc365-25bc-43a9-9e8a-c17749832d20\") " pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.371754 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxkc6\" (UniqueName: \"kubernetes.io/projected/5dadc365-25bc-43a9-9e8a-c17749832d20-kube-api-access-bxkc6\") pod \"ssh-known-hosts-edpm-deployment-qj8wj\" (UID: \"5dadc365-25bc-43a9-9e8a-c17749832d20\") " pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.458179 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj"
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.985180 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qj8wj"]
Nov 29 07:40:23 crc kubenswrapper[4828]: I1129 07:40:23.997959 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 29 07:40:24 crc kubenswrapper[4828]: I1129 07:40:24.041138 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj" event={"ID":"5dadc365-25bc-43a9-9e8a-c17749832d20","Type":"ContainerStarted","Data":"f4776654151003c9537feee109d809bbe90d19c54447c8446fc5b4395658683d"}
Nov 29 07:40:26 crc kubenswrapper[4828]: I1129 07:40:26.057296 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj" event={"ID":"5dadc365-25bc-43a9-9e8a-c17749832d20","Type":"ContainerStarted","Data":"c24cbb086fbed449d4e12add9e91495574cbdc85299899f4547beaad817bb171"}
Nov 29 07:40:26 crc kubenswrapper[4828]: I1129 07:40:26.079175 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj" podStartSLOduration=1.899860128 podStartE2EDuration="3.079147551s" podCreationTimestamp="2025-11-29 07:40:23 +0000 UTC" firstStartedPulling="2025-11-29 07:40:23.99774552 +0000 UTC m=+2363.619821578" lastFinishedPulling="2025-11-29 07:40:25.177032943 +0000 UTC m=+2364.799109001" observedRunningTime="2025-11-29 07:40:26.072615135 +0000 UTC m=+2365.694691193" watchObservedRunningTime="2025-11-29 07:40:26.079147551 +0000 UTC m=+2365.701223609"
Nov 29 07:40:33 crc kubenswrapper[4828]: I1129 07:40:33.123048 4828 generic.go:334] "Generic (PLEG): container finished" podID="5dadc365-25bc-43a9-9e8a-c17749832d20" containerID="c24cbb086fbed449d4e12add9e91495574cbdc85299899f4547beaad817bb171" exitCode=0
Nov 29 07:40:33 crc kubenswrapper[4828]: I1129 07:40:33.123119 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj" event={"ID":"5dadc365-25bc-43a9-9e8a-c17749832d20","Type":"ContainerDied","Data":"c24cbb086fbed449d4e12add9e91495574cbdc85299899f4547beaad817bb171"}
Nov 29 07:40:34 crc kubenswrapper[4828]: I1129 07:40:34.547182 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj"
Nov 29 07:40:34 crc kubenswrapper[4828]: I1129 07:40:34.725005 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5dadc365-25bc-43a9-9e8a-c17749832d20-inventory-0\") pod \"5dadc365-25bc-43a9-9e8a-c17749832d20\" (UID: \"5dadc365-25bc-43a9-9e8a-c17749832d20\") "
Nov 29 07:40:34 crc kubenswrapper[4828]: I1129 07:40:34.725430 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxkc6\" (UniqueName: \"kubernetes.io/projected/5dadc365-25bc-43a9-9e8a-c17749832d20-kube-api-access-bxkc6\") pod \"5dadc365-25bc-43a9-9e8a-c17749832d20\" (UID: \"5dadc365-25bc-43a9-9e8a-c17749832d20\") "
Nov 29 07:40:34 crc kubenswrapper[4828]: I1129 07:40:34.725737 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dadc365-25bc-43a9-9e8a-c17749832d20-ssh-key-openstack-edpm-ipam\") pod \"5dadc365-25bc-43a9-9e8a-c17749832d20\" (UID: \"5dadc365-25bc-43a9-9e8a-c17749832d20\") "
Nov 29 07:40:34 crc kubenswrapper[4828]: I1129 07:40:34.731518 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dadc365-25bc-43a9-9e8a-c17749832d20-kube-api-access-bxkc6" (OuterVolumeSpecName: "kube-api-access-bxkc6") pod "5dadc365-25bc-43a9-9e8a-c17749832d20" (UID: "5dadc365-25bc-43a9-9e8a-c17749832d20"). InnerVolumeSpecName "kube-api-access-bxkc6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:40:34 crc kubenswrapper[4828]: I1129 07:40:34.754037 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dadc365-25bc-43a9-9e8a-c17749832d20-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "5dadc365-25bc-43a9-9e8a-c17749832d20" (UID: "5dadc365-25bc-43a9-9e8a-c17749832d20"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:34 crc kubenswrapper[4828]: I1129 07:40:34.755686 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dadc365-25bc-43a9-9e8a-c17749832d20-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5dadc365-25bc-43a9-9e8a-c17749832d20" (UID: "5dadc365-25bc-43a9-9e8a-c17749832d20"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:34 crc kubenswrapper[4828]: I1129 07:40:34.829291 4828 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5dadc365-25bc-43a9-9e8a-c17749832d20-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:34 crc kubenswrapper[4828]: I1129 07:40:34.829336 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxkc6\" (UniqueName: \"kubernetes.io/projected/5dadc365-25bc-43a9-9e8a-c17749832d20-kube-api-access-bxkc6\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:34 crc kubenswrapper[4828]: I1129 07:40:34.829351 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dadc365-25bc-43a9-9e8a-c17749832d20-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.145864 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj" event={"ID":"5dadc365-25bc-43a9-9e8a-c17749832d20","Type":"ContainerDied","Data":"f4776654151003c9537feee109d809bbe90d19c54447c8446fc5b4395658683d"} Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.145922 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qj8wj" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.145940 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4776654151003c9537feee109d809bbe90d19c54447c8446fc5b4395658683d" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.212062 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt"] Nov 29 07:40:35 crc kubenswrapper[4828]: E1129 07:40:35.212993 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dadc365-25bc-43a9-9e8a-c17749832d20" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.213034 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dadc365-25bc-43a9-9e8a-c17749832d20" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.213328 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dadc365-25bc-43a9-9e8a-c17749832d20" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.214217 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.217076 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.217648 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.217756 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.218223 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.245045 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt"] Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.340102 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/933e9cb6-fe3b-4e84-869c-ee299d147048-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz5xt\" (UID: \"933e9cb6-fe3b-4e84-869c-ee299d147048\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.340236 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bctg\" (UniqueName: \"kubernetes.io/projected/933e9cb6-fe3b-4e84-869c-ee299d147048-kube-api-access-7bctg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz5xt\" (UID: \"933e9cb6-fe3b-4e84-869c-ee299d147048\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.340302 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/933e9cb6-fe3b-4e84-869c-ee299d147048-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz5xt\" (UID: \"933e9cb6-fe3b-4e84-869c-ee299d147048\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.411617 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:40:35 crc kubenswrapper[4828]: E1129 07:40:35.411916 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.442796 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/933e9cb6-fe3b-4e84-869c-ee299d147048-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz5xt\" (UID: \"933e9cb6-fe3b-4e84-869c-ee299d147048\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.443122 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bctg\" (UniqueName: \"kubernetes.io/projected/933e9cb6-fe3b-4e84-869c-ee299d147048-kube-api-access-7bctg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz5xt\" (UID: \"933e9cb6-fe3b-4e84-869c-ee299d147048\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.443859 4828 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/933e9cb6-fe3b-4e84-869c-ee299d147048-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz5xt\" (UID: \"933e9cb6-fe3b-4e84-869c-ee299d147048\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.447194 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/933e9cb6-fe3b-4e84-869c-ee299d147048-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz5xt\" (UID: \"933e9cb6-fe3b-4e84-869c-ee299d147048\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.447316 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/933e9cb6-fe3b-4e84-869c-ee299d147048-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz5xt\" (UID: \"933e9cb6-fe3b-4e84-869c-ee299d147048\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.459935 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bctg\" (UniqueName: \"kubernetes.io/projected/933e9cb6-fe3b-4e84-869c-ee299d147048-kube-api-access-7bctg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz5xt\" (UID: \"933e9cb6-fe3b-4e84-869c-ee299d147048\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" Nov 29 07:40:35 crc kubenswrapper[4828]: I1129 07:40:35.567466 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" Nov 29 07:40:36 crc kubenswrapper[4828]: I1129 07:40:36.085107 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt"] Nov 29 07:40:36 crc kubenswrapper[4828]: I1129 07:40:36.156489 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" event={"ID":"933e9cb6-fe3b-4e84-869c-ee299d147048","Type":"ContainerStarted","Data":"493fea0d8c0209b9b7bd07ae13620c45250d0fd54a80c0336039cfde2e73dc88"} Nov 29 07:40:37 crc kubenswrapper[4828]: I1129 07:40:37.168863 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" event={"ID":"933e9cb6-fe3b-4e84-869c-ee299d147048","Type":"ContainerStarted","Data":"5b1d18a39e36cedf4b638b605786ea69e35ea6d7929c96b06b82190e8228a710"} Nov 29 07:40:37 crc kubenswrapper[4828]: I1129 07:40:37.227638 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" podStartSLOduration=1.553173221 podStartE2EDuration="2.227608359s" podCreationTimestamp="2025-11-29 07:40:35 +0000 UTC" firstStartedPulling="2025-11-29 07:40:36.091207152 +0000 UTC m=+2375.713283210" lastFinishedPulling="2025-11-29 07:40:36.76564229 +0000 UTC m=+2376.387718348" observedRunningTime="2025-11-29 07:40:37.182312563 +0000 UTC m=+2376.804388621" watchObservedRunningTime="2025-11-29 07:40:37.227608359 +0000 UTC m=+2376.849684407" Nov 29 07:40:46 crc kubenswrapper[4828]: I1129 07:40:46.262579 4828 generic.go:334] "Generic (PLEG): container finished" podID="933e9cb6-fe3b-4e84-869c-ee299d147048" containerID="5b1d18a39e36cedf4b638b605786ea69e35ea6d7929c96b06b82190e8228a710" exitCode=0 Nov 29 07:40:46 crc kubenswrapper[4828]: I1129 07:40:46.262676 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" event={"ID":"933e9cb6-fe3b-4e84-869c-ee299d147048","Type":"ContainerDied","Data":"5b1d18a39e36cedf4b638b605786ea69e35ea6d7929c96b06b82190e8228a710"} Nov 29 07:40:47 crc kubenswrapper[4828]: I1129 07:40:47.690901 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" Nov 29 07:40:47 crc kubenswrapper[4828]: I1129 07:40:47.823900 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bctg\" (UniqueName: \"kubernetes.io/projected/933e9cb6-fe3b-4e84-869c-ee299d147048-kube-api-access-7bctg\") pod \"933e9cb6-fe3b-4e84-869c-ee299d147048\" (UID: \"933e9cb6-fe3b-4e84-869c-ee299d147048\") " Nov 29 07:40:47 crc kubenswrapper[4828]: I1129 07:40:47.824079 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/933e9cb6-fe3b-4e84-869c-ee299d147048-ssh-key\") pod \"933e9cb6-fe3b-4e84-869c-ee299d147048\" (UID: \"933e9cb6-fe3b-4e84-869c-ee299d147048\") " Nov 29 07:40:47 crc kubenswrapper[4828]: I1129 07:40:47.824123 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/933e9cb6-fe3b-4e84-869c-ee299d147048-inventory\") pod \"933e9cb6-fe3b-4e84-869c-ee299d147048\" (UID: \"933e9cb6-fe3b-4e84-869c-ee299d147048\") " Nov 29 07:40:47 crc kubenswrapper[4828]: I1129 07:40:47.834727 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/933e9cb6-fe3b-4e84-869c-ee299d147048-kube-api-access-7bctg" (OuterVolumeSpecName: "kube-api-access-7bctg") pod "933e9cb6-fe3b-4e84-869c-ee299d147048" (UID: "933e9cb6-fe3b-4e84-869c-ee299d147048"). InnerVolumeSpecName "kube-api-access-7bctg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:47 crc kubenswrapper[4828]: I1129 07:40:47.854183 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933e9cb6-fe3b-4e84-869c-ee299d147048-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "933e9cb6-fe3b-4e84-869c-ee299d147048" (UID: "933e9cb6-fe3b-4e84-869c-ee299d147048"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:47 crc kubenswrapper[4828]: I1129 07:40:47.858175 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933e9cb6-fe3b-4e84-869c-ee299d147048-inventory" (OuterVolumeSpecName: "inventory") pod "933e9cb6-fe3b-4e84-869c-ee299d147048" (UID: "933e9cb6-fe3b-4e84-869c-ee299d147048"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:47 crc kubenswrapper[4828]: I1129 07:40:47.927305 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bctg\" (UniqueName: \"kubernetes.io/projected/933e9cb6-fe3b-4e84-869c-ee299d147048-kube-api-access-7bctg\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:47 crc kubenswrapper[4828]: I1129 07:40:47.927386 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/933e9cb6-fe3b-4e84-869c-ee299d147048-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:47 crc kubenswrapper[4828]: I1129 07:40:47.927400 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/933e9cb6-fe3b-4e84-869c-ee299d147048-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.283810 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" 
event={"ID":"933e9cb6-fe3b-4e84-869c-ee299d147048","Type":"ContainerDied","Data":"493fea0d8c0209b9b7bd07ae13620c45250d0fd54a80c0336039cfde2e73dc88"} Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.284135 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="493fea0d8c0209b9b7bd07ae13620c45250d0fd54a80c0336039cfde2e73dc88" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.283875 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz5xt" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.362089 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r"] Nov 29 07:40:48 crc kubenswrapper[4828]: E1129 07:40:48.362605 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933e9cb6-fe3b-4e84-869c-ee299d147048" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.362631 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="933e9cb6-fe3b-4e84-869c-ee299d147048" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.362888 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="933e9cb6-fe3b-4e84-869c-ee299d147048" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.363840 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.366191 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.366484 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.366548 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.366760 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.373167 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r"] Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.539814 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73db3b43-20c5-4549-9414-3a352d30b599-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r\" (UID: \"73db3b43-20c5-4549-9414-3a352d30b599\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.539952 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkxc4\" (UniqueName: \"kubernetes.io/projected/73db3b43-20c5-4549-9414-3a352d30b599-kube-api-access-lkxc4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r\" (UID: \"73db3b43-20c5-4549-9414-3a352d30b599\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.539983 4828 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73db3b43-20c5-4549-9414-3a352d30b599-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r\" (UID: \"73db3b43-20c5-4549-9414-3a352d30b599\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.641954 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkxc4\" (UniqueName: \"kubernetes.io/projected/73db3b43-20c5-4549-9414-3a352d30b599-kube-api-access-lkxc4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r\" (UID: \"73db3b43-20c5-4549-9414-3a352d30b599\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.642002 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73db3b43-20c5-4549-9414-3a352d30b599-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r\" (UID: \"73db3b43-20c5-4549-9414-3a352d30b599\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.642154 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73db3b43-20c5-4549-9414-3a352d30b599-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r\" (UID: \"73db3b43-20c5-4549-9414-3a352d30b599\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.646413 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73db3b43-20c5-4549-9414-3a352d30b599-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r\" (UID: 
\"73db3b43-20c5-4549-9414-3a352d30b599\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.649346 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73db3b43-20c5-4549-9414-3a352d30b599-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r\" (UID: \"73db3b43-20c5-4549-9414-3a352d30b599\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.660178 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkxc4\" (UniqueName: \"kubernetes.io/projected/73db3b43-20c5-4549-9414-3a352d30b599-kube-api-access-lkxc4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r\" (UID: \"73db3b43-20c5-4549-9414-3a352d30b599\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" Nov 29 07:40:48 crc kubenswrapper[4828]: I1129 07:40:48.699536 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" Nov 29 07:40:49 crc kubenswrapper[4828]: I1129 07:40:49.259164 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r"] Nov 29 07:40:49 crc kubenswrapper[4828]: I1129 07:40:49.320895 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" event={"ID":"73db3b43-20c5-4549-9414-3a352d30b599","Type":"ContainerStarted","Data":"d30ba16d4b50e58cfc6e484028eba2631f964684c2f2b688899f4819b351a034"} Nov 29 07:40:49 crc kubenswrapper[4828]: I1129 07:40:49.411530 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:40:49 crc kubenswrapper[4828]: E1129 07:40:49.411827 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:40:50 crc kubenswrapper[4828]: I1129 07:40:50.331311 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" event={"ID":"73db3b43-20c5-4549-9414-3a352d30b599","Type":"ContainerStarted","Data":"75d5c3189e6d7eb262115255ace43f124e8308170bf342a57314991b258a1f84"} Nov 29 07:40:50 crc kubenswrapper[4828]: I1129 07:40:50.355855 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" podStartSLOduration=1.856584055 podStartE2EDuration="2.355825854s" podCreationTimestamp="2025-11-29 07:40:48 +0000 UTC" firstStartedPulling="2025-11-29 
07:40:49.260259488 +0000 UTC m=+2388.882335556" lastFinishedPulling="2025-11-29 07:40:49.759501297 +0000 UTC m=+2389.381577355" observedRunningTime="2025-11-29 07:40:50.346990408 +0000 UTC m=+2389.969066476" watchObservedRunningTime="2025-11-29 07:40:50.355825854 +0000 UTC m=+2389.977901912" Nov 29 07:41:00 crc kubenswrapper[4828]: I1129 07:41:00.412648 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:41:00 crc kubenswrapper[4828]: E1129 07:41:00.413362 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:41:02 crc kubenswrapper[4828]: I1129 07:41:02.843801 4828 generic.go:334] "Generic (PLEG): container finished" podID="73db3b43-20c5-4549-9414-3a352d30b599" containerID="75d5c3189e6d7eb262115255ace43f124e8308170bf342a57314991b258a1f84" exitCode=0 Nov 29 07:41:02 crc kubenswrapper[4828]: I1129 07:41:02.843893 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" event={"ID":"73db3b43-20c5-4549-9414-3a352d30b599","Type":"ContainerDied","Data":"75d5c3189e6d7eb262115255ace43f124e8308170bf342a57314991b258a1f84"} Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.262142 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.291313 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73db3b43-20c5-4549-9414-3a352d30b599-ssh-key\") pod \"73db3b43-20c5-4549-9414-3a352d30b599\" (UID: \"73db3b43-20c5-4549-9414-3a352d30b599\") " Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.291393 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73db3b43-20c5-4549-9414-3a352d30b599-inventory\") pod \"73db3b43-20c5-4549-9414-3a352d30b599\" (UID: \"73db3b43-20c5-4549-9414-3a352d30b599\") " Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.291418 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkxc4\" (UniqueName: \"kubernetes.io/projected/73db3b43-20c5-4549-9414-3a352d30b599-kube-api-access-lkxc4\") pod \"73db3b43-20c5-4549-9414-3a352d30b599\" (UID: \"73db3b43-20c5-4549-9414-3a352d30b599\") " Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.310950 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73db3b43-20c5-4549-9414-3a352d30b599-kube-api-access-lkxc4" (OuterVolumeSpecName: "kube-api-access-lkxc4") pod "73db3b43-20c5-4549-9414-3a352d30b599" (UID: "73db3b43-20c5-4549-9414-3a352d30b599"). InnerVolumeSpecName "kube-api-access-lkxc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.331769 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73db3b43-20c5-4549-9414-3a352d30b599-inventory" (OuterVolumeSpecName: "inventory") pod "73db3b43-20c5-4549-9414-3a352d30b599" (UID: "73db3b43-20c5-4549-9414-3a352d30b599"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.333843 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73db3b43-20c5-4549-9414-3a352d30b599-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "73db3b43-20c5-4549-9414-3a352d30b599" (UID: "73db3b43-20c5-4549-9414-3a352d30b599"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.394139 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73db3b43-20c5-4549-9414-3a352d30b599-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.394188 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkxc4\" (UniqueName: \"kubernetes.io/projected/73db3b43-20c5-4549-9414-3a352d30b599-kube-api-access-lkxc4\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.394202 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73db3b43-20c5-4549-9414-3a352d30b599-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.864907 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" event={"ID":"73db3b43-20c5-4549-9414-3a352d30b599","Type":"ContainerDied","Data":"d30ba16d4b50e58cfc6e484028eba2631f964684c2f2b688899f4819b351a034"} Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.864971 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d30ba16d4b50e58cfc6e484028eba2631f964684c2f2b688899f4819b351a034" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.864979 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.948484 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44"] Nov 29 07:41:04 crc kubenswrapper[4828]: E1129 07:41:04.948953 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73db3b43-20c5-4549-9414-3a352d30b599" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.948981 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="73db3b43-20c5-4549-9414-3a352d30b599" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.949325 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="73db3b43-20c5-4549-9414-3a352d30b599" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.950178 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.956559 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.956750 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.956893 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.956918 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.956993 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.957157 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.956741 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.958194 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 29 07:41:04 crc kubenswrapper[4828]: I1129 07:41:04.958763 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44"] Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.001918 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.001968 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.001993 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.002040 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.002161 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.002290 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6nvl\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-kube-api-access-r6nvl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.002396 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.002538 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.002590 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.002615 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.002727 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.002809 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.002869 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.002954 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.105490 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.105573 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.105612 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.105643 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.105713 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.105736 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.105761 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" 
(UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.105826 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.105862 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.105887 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6nvl\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-kube-api-access-r6nvl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.105928 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.105987 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.106012 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.106041 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.116015 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc 
kubenswrapper[4828]: I1129 07:41:05.116692 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.116780 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.116839 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.116917 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.117858 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.118163 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.118555 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.118611 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.118776 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.119172 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.125426 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.131839 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.139399 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6nvl\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-kube-api-access-r6nvl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h6k44\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.269686 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.815025 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44"] Nov 29 07:41:05 crc kubenswrapper[4828]: I1129 07:41:05.876212 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" event={"ID":"2da06014-9f35-43c6-88f3-7e9f6ffd3baf","Type":"ContainerStarted","Data":"ef9a3eb5ef4e7b8175cc3f57ad69a3b6c179fe1c4b5b19edf617255024327d86"} Nov 29 07:41:06 crc kubenswrapper[4828]: I1129 07:41:06.887530 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" event={"ID":"2da06014-9f35-43c6-88f3-7e9f6ffd3baf","Type":"ContainerStarted","Data":"6abc548af2b70de737f3a4bc21400cd302a212f7c3d3aa2dff20bcf18090c262"} Nov 29 07:41:06 crc kubenswrapper[4828]: I1129 07:41:06.926400 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" podStartSLOduration=2.278920105 podStartE2EDuration="2.926373146s" podCreationTimestamp="2025-11-29 07:41:04 +0000 UTC" firstStartedPulling="2025-11-29 07:41:05.823422612 +0000 UTC m=+2405.445498670" lastFinishedPulling="2025-11-29 07:41:06.470875653 +0000 UTC m=+2406.092951711" observedRunningTime="2025-11-29 07:41:06.919227004 +0000 UTC m=+2406.541303062" watchObservedRunningTime="2025-11-29 07:41:06.926373146 +0000 UTC m=+2406.548449204" Nov 29 07:41:13 crc kubenswrapper[4828]: I1129 07:41:13.411773 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 
07:41:13 crc kubenswrapper[4828]: E1129 07:41:13.412958 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:41:25 crc kubenswrapper[4828]: I1129 07:41:25.411607 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:41:25 crc kubenswrapper[4828]: E1129 07:41:25.413309 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:41:40 crc kubenswrapper[4828]: I1129 07:41:40.411716 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:41:40 crc kubenswrapper[4828]: E1129 07:41:40.412484 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:41:47 crc kubenswrapper[4828]: I1129 07:41:47.360811 4828 generic.go:334] "Generic (PLEG): container finished" podID="2da06014-9f35-43c6-88f3-7e9f6ffd3baf" 
containerID="6abc548af2b70de737f3a4bc21400cd302a212f7c3d3aa2dff20bcf18090c262" exitCode=0 Nov 29 07:41:47 crc kubenswrapper[4828]: I1129 07:41:47.360919 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" event={"ID":"2da06014-9f35-43c6-88f3-7e9f6ffd3baf","Type":"ContainerDied","Data":"6abc548af2b70de737f3a4bc21400cd302a212f7c3d3aa2dff20bcf18090c262"} Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.810918 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.936992 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-bootstrap-combined-ca-bundle\") pod \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.937082 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.937138 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.937190 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-neutron-metadata-combined-ca-bundle\") pod \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.937225 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-ssh-key\") pod \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.937291 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.937341 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-telemetry-combined-ca-bundle\") pod \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.937362 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-nova-combined-ca-bundle\") pod \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.937389 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.937483 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6nvl\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-kube-api-access-r6nvl\") pod \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.937555 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-libvirt-combined-ca-bundle\") pod \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.937639 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-inventory\") pod \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.937667 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-repo-setup-combined-ca-bundle\") pod \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\" (UID: \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.937696 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-ovn-combined-ca-bundle\") pod \"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\" (UID: 
\"2da06014-9f35-43c6-88f3-7e9f6ffd3baf\") " Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.964974 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2da06014-9f35-43c6-88f3-7e9f6ffd3baf" (UID: "2da06014-9f35-43c6-88f3-7e9f6ffd3baf"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.965744 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2da06014-9f35-43c6-88f3-7e9f6ffd3baf" (UID: "2da06014-9f35-43c6-88f3-7e9f6ffd3baf"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.967644 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2da06014-9f35-43c6-88f3-7e9f6ffd3baf" (UID: "2da06014-9f35-43c6-88f3-7e9f6ffd3baf"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.970643 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "2da06014-9f35-43c6-88f3-7e9f6ffd3baf" (UID: "2da06014-9f35-43c6-88f3-7e9f6ffd3baf"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.970813 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-kube-api-access-r6nvl" (OuterVolumeSpecName: "kube-api-access-r6nvl") pod "2da06014-9f35-43c6-88f3-7e9f6ffd3baf" (UID: "2da06014-9f35-43c6-88f3-7e9f6ffd3baf"). InnerVolumeSpecName "kube-api-access-r6nvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.970942 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "2da06014-9f35-43c6-88f3-7e9f6ffd3baf" (UID: "2da06014-9f35-43c6-88f3-7e9f6ffd3baf"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.971144 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2da06014-9f35-43c6-88f3-7e9f6ffd3baf" (UID: "2da06014-9f35-43c6-88f3-7e9f6ffd3baf"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.972405 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "2da06014-9f35-43c6-88f3-7e9f6ffd3baf" (UID: "2da06014-9f35-43c6-88f3-7e9f6ffd3baf"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.974525 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2da06014-9f35-43c6-88f3-7e9f6ffd3baf" (UID: "2da06014-9f35-43c6-88f3-7e9f6ffd3baf"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.976114 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "2da06014-9f35-43c6-88f3-7e9f6ffd3baf" (UID: "2da06014-9f35-43c6-88f3-7e9f6ffd3baf"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.977404 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2da06014-9f35-43c6-88f3-7e9f6ffd3baf" (UID: "2da06014-9f35-43c6-88f3-7e9f6ffd3baf"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:48 crc kubenswrapper[4828]: I1129 07:41:48.986459 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2da06014-9f35-43c6-88f3-7e9f6ffd3baf" (UID: "2da06014-9f35-43c6-88f3-7e9f6ffd3baf"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.022547 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-inventory" (OuterVolumeSpecName: "inventory") pod "2da06014-9f35-43c6-88f3-7e9f6ffd3baf" (UID: "2da06014-9f35-43c6-88f3-7e9f6ffd3baf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.026628 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2da06014-9f35-43c6-88f3-7e9f6ffd3baf" (UID: "2da06014-9f35-43c6-88f3-7e9f6ffd3baf"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.040860 4828 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.040900 4828 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.040916 4828 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.040931 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6nvl\" (UniqueName: 
\"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-kube-api-access-r6nvl\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.040945 4828 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.040959 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.040970 4828 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.040981 4828 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.040991 4828 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.041003 4828 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.041014 4828 reconciler_common.go:293] "Volume detached for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.041026 4828 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.041037 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.041052 4828 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2da06014-9f35-43c6-88f3-7e9f6ffd3baf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.380522 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" event={"ID":"2da06014-9f35-43c6-88f3-7e9f6ffd3baf","Type":"ContainerDied","Data":"ef9a3eb5ef4e7b8175cc3f57ad69a3b6c179fe1c4b5b19edf617255024327d86"} Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.380906 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef9a3eb5ef4e7b8175cc3f57ad69a3b6c179fe1c4b5b19edf617255024327d86" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.380584 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h6k44" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.472163 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk"] Nov 29 07:41:49 crc kubenswrapper[4828]: E1129 07:41:49.472629 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da06014-9f35-43c6-88f3-7e9f6ffd3baf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.472660 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da06014-9f35-43c6-88f3-7e9f6ffd3baf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.472922 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da06014-9f35-43c6-88f3-7e9f6ffd3baf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.473639 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.475942 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.476007 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.477451 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.477500 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.481522 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.491406 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk"] Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.653264 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.653338 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.653440 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.653521 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.653555 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27dgq\" (UniqueName: \"kubernetes.io/projected/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-kube-api-access-27dgq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.755092 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.755190 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.755238 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27dgq\" (UniqueName: \"kubernetes.io/projected/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-kube-api-access-27dgq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.755747 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.755794 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.756184 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc 
kubenswrapper[4828]: I1129 07:41:49.759746 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.760403 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.761214 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.778405 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27dgq\" (UniqueName: \"kubernetes.io/projected/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-kube-api-access-27dgq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-894mk\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:49 crc kubenswrapper[4828]: I1129 07:41:49.797606 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:41:50 crc kubenswrapper[4828]: I1129 07:41:50.332307 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk"] Nov 29 07:41:50 crc kubenswrapper[4828]: I1129 07:41:50.391679 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" event={"ID":"3ba051a3-9160-4e2d-85b3-88f7c43c00c7","Type":"ContainerStarted","Data":"b0e4f87329489b9d5aba005753f3e6532f6a3ff1040a3366eef0bdcfe0ac145d"} Nov 29 07:41:51 crc kubenswrapper[4828]: I1129 07:41:51.406940 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" event={"ID":"3ba051a3-9160-4e2d-85b3-88f7c43c00c7","Type":"ContainerStarted","Data":"245a25655b71af031f841ed6c356d259ad95d7ed1b1b6227e0b00853ee4c0576"} Nov 29 07:41:54 crc kubenswrapper[4828]: I1129 07:41:54.412250 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:41:54 crc kubenswrapper[4828]: E1129 07:41:54.413428 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:42:09 crc kubenswrapper[4828]: I1129 07:42:09.412734 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:42:09 crc kubenswrapper[4828]: E1129 07:42:09.413612 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:42:20 crc kubenswrapper[4828]: I1129 07:42:20.412497 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:42:20 crc kubenswrapper[4828]: E1129 07:42:20.413446 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:42:34 crc kubenswrapper[4828]: I1129 07:42:34.412080 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:42:34 crc kubenswrapper[4828]: E1129 07:42:34.414328 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:42:48 crc kubenswrapper[4828]: I1129 07:42:48.412056 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:42:48 crc kubenswrapper[4828]: E1129 07:42:48.415379 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:43:00 crc kubenswrapper[4828]: I1129 07:43:00.313725 4828 generic.go:334] "Generic (PLEG): container finished" podID="3ba051a3-9160-4e2d-85b3-88f7c43c00c7" containerID="245a25655b71af031f841ed6c356d259ad95d7ed1b1b6227e0b00853ee4c0576" exitCode=0 Nov 29 07:43:00 crc kubenswrapper[4828]: I1129 07:43:00.313795 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" event={"ID":"3ba051a3-9160-4e2d-85b3-88f7c43c00c7","Type":"ContainerDied","Data":"245a25655b71af031f841ed6c356d259ad95d7ed1b1b6227e0b00853ee4c0576"} Nov 29 07:43:01 crc kubenswrapper[4828]: I1129 07:43:01.791134 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:43:01 crc kubenswrapper[4828]: I1129 07:43:01.979484 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27dgq\" (UniqueName: \"kubernetes.io/projected/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-kube-api-access-27dgq\") pod \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " Nov 29 07:43:01 crc kubenswrapper[4828]: I1129 07:43:01.979695 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ovncontroller-config-0\") pod \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " Nov 29 07:43:01 crc kubenswrapper[4828]: I1129 07:43:01.979792 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ssh-key\") pod \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " Nov 29 07:43:01 crc kubenswrapper[4828]: I1129 07:43:01.979830 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-inventory\") pod \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " Nov 29 07:43:01 crc kubenswrapper[4828]: I1129 07:43:01.979914 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ovn-combined-ca-bundle\") pod \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\" (UID: \"3ba051a3-9160-4e2d-85b3-88f7c43c00c7\") " Nov 29 07:43:01 crc kubenswrapper[4828]: I1129 07:43:01.986509 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-kube-api-access-27dgq" (OuterVolumeSpecName: "kube-api-access-27dgq") pod "3ba051a3-9160-4e2d-85b3-88f7c43c00c7" (UID: "3ba051a3-9160-4e2d-85b3-88f7c43c00c7"). InnerVolumeSpecName "kube-api-access-27dgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:01 crc kubenswrapper[4828]: I1129 07:43:01.989493 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3ba051a3-9160-4e2d-85b3-88f7c43c00c7" (UID: "3ba051a3-9160-4e2d-85b3-88f7c43c00c7"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.011792 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "3ba051a3-9160-4e2d-85b3-88f7c43c00c7" (UID: "3ba051a3-9160-4e2d-85b3-88f7c43c00c7"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.012121 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-inventory" (OuterVolumeSpecName: "inventory") pod "3ba051a3-9160-4e2d-85b3-88f7c43c00c7" (UID: "3ba051a3-9160-4e2d-85b3-88f7c43c00c7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.014549 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3ba051a3-9160-4e2d-85b3-88f7c43c00c7" (UID: "3ba051a3-9160-4e2d-85b3-88f7c43c00c7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.081964 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.082000 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.082014 4828 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.082042 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27dgq\" (UniqueName: \"kubernetes.io/projected/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-kube-api-access-27dgq\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.082054 4828 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3ba051a3-9160-4e2d-85b3-88f7c43c00c7-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.340570 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" event={"ID":"3ba051a3-9160-4e2d-85b3-88f7c43c00c7","Type":"ContainerDied","Data":"b0e4f87329489b9d5aba005753f3e6532f6a3ff1040a3366eef0bdcfe0ac145d"} Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.340659 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0e4f87329489b9d5aba005753f3e6532f6a3ff1040a3366eef0bdcfe0ac145d" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.340666 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-894mk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.450755 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:43:02 crc kubenswrapper[4828]: E1129 07:43:02.451315 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.485113 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk"] Nov 29 07:43:02 crc kubenswrapper[4828]: E1129 07:43:02.485684 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba051a3-9160-4e2d-85b3-88f7c43c00c7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.485723 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba051a3-9160-4e2d-85b3-88f7c43c00c7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 
07:43:02.486025 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba051a3-9160-4e2d-85b3-88f7c43c00c7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.486973 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.490659 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.491030 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.491166 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.491329 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.491498 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.494750 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.499295 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk"] Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.593993 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dlxj\" (UniqueName: \"kubernetes.io/projected/6494a5a0-15bc-42c7-a812-8ca66317bea7-kube-api-access-8dlxj\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.594163 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.594390 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.594426 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.594714 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: 
\"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.594969 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.695644 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dlxj\" (UniqueName: \"kubernetes.io/projected/6494a5a0-15bc-42c7-a812-8ca66317bea7-kube-api-access-8dlxj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.695735 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.695812 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.695830 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.695874 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.695921 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.702990 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 
07:43:02.703204 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.703404 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.703754 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.704251 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.715111 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dlxj\" (UniqueName: 
\"kubernetes.io/projected/6494a5a0-15bc-42c7-a812-8ca66317bea7-kube-api-access-8dlxj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:02 crc kubenswrapper[4828]: I1129 07:43:02.809416 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:03 crc kubenswrapper[4828]: I1129 07:43:03.342735 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk"] Nov 29 07:43:04 crc kubenswrapper[4828]: I1129 07:43:04.363968 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" event={"ID":"6494a5a0-15bc-42c7-a812-8ca66317bea7","Type":"ContainerStarted","Data":"cd6d7247d3a3fa21157e14e4e1493ca6db190cf3ce6fb81b888a9f238f31d5da"} Nov 29 07:43:05 crc kubenswrapper[4828]: I1129 07:43:05.375036 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" event={"ID":"6494a5a0-15bc-42c7-a812-8ca66317bea7","Type":"ContainerStarted","Data":"076d1fe5004a8b002dce4b73e534252c098f7b21bdaccd128ba5b24fc6af309d"} Nov 29 07:43:05 crc kubenswrapper[4828]: I1129 07:43:05.392021 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" podStartSLOduration=2.50245651 podStartE2EDuration="3.391955904s" podCreationTimestamp="2025-11-29 07:43:02 +0000 UTC" firstStartedPulling="2025-11-29 07:43:03.354254079 +0000 UTC m=+2522.976330137" lastFinishedPulling="2025-11-29 07:43:04.243753473 +0000 UTC m=+2523.865829531" observedRunningTime="2025-11-29 07:43:05.390015246 +0000 UTC m=+2525.012091314" watchObservedRunningTime="2025-11-29 
07:43:05.391955904 +0000 UTC m=+2525.014031982" Nov 29 07:43:14 crc kubenswrapper[4828]: I1129 07:43:14.412013 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:43:14 crc kubenswrapper[4828]: E1129 07:43:14.412899 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:43:28 crc kubenswrapper[4828]: I1129 07:43:28.412409 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:43:28 crc kubenswrapper[4828]: E1129 07:43:28.413382 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:43:41 crc kubenswrapper[4828]: I1129 07:43:41.418077 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:43:41 crc kubenswrapper[4828]: E1129 07:43:41.420065 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:43:55 crc kubenswrapper[4828]: I1129 07:43:55.849264 4828 generic.go:334] "Generic (PLEG): container finished" podID="6494a5a0-15bc-42c7-a812-8ca66317bea7" containerID="076d1fe5004a8b002dce4b73e534252c098f7b21bdaccd128ba5b24fc6af309d" exitCode=0 Nov 29 07:43:55 crc kubenswrapper[4828]: I1129 07:43:55.849357 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" event={"ID":"6494a5a0-15bc-42c7-a812-8ca66317bea7","Type":"ContainerDied","Data":"076d1fe5004a8b002dce4b73e534252c098f7b21bdaccd128ba5b24fc6af309d"} Nov 29 07:43:56 crc kubenswrapper[4828]: I1129 07:43:56.412889 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:43:56 crc kubenswrapper[4828]: E1129 07:43:56.413195 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.302082 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.387978 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-ssh-key\") pod \"6494a5a0-15bc-42c7-a812-8ca66317bea7\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.388167 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"6494a5a0-15bc-42c7-a812-8ca66317bea7\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.388224 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-nova-metadata-neutron-config-0\") pod \"6494a5a0-15bc-42c7-a812-8ca66317bea7\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.388254 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-neutron-metadata-combined-ca-bundle\") pod \"6494a5a0-15bc-42c7-a812-8ca66317bea7\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.388319 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-inventory\") pod \"6494a5a0-15bc-42c7-a812-8ca66317bea7\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " Nov 29 07:43:57 crc 
kubenswrapper[4828]: I1129 07:43:57.388552 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dlxj\" (UniqueName: \"kubernetes.io/projected/6494a5a0-15bc-42c7-a812-8ca66317bea7-kube-api-access-8dlxj\") pod \"6494a5a0-15bc-42c7-a812-8ca66317bea7\" (UID: \"6494a5a0-15bc-42c7-a812-8ca66317bea7\") " Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.399260 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6494a5a0-15bc-42c7-a812-8ca66317bea7" (UID: "6494a5a0-15bc-42c7-a812-8ca66317bea7"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.420015 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6494a5a0-15bc-42c7-a812-8ca66317bea7-kube-api-access-8dlxj" (OuterVolumeSpecName: "kube-api-access-8dlxj") pod "6494a5a0-15bc-42c7-a812-8ca66317bea7" (UID: "6494a5a0-15bc-42c7-a812-8ca66317bea7"). InnerVolumeSpecName "kube-api-access-8dlxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.422246 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "6494a5a0-15bc-42c7-a812-8ca66317bea7" (UID: "6494a5a0-15bc-42c7-a812-8ca66317bea7"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.424803 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-inventory" (OuterVolumeSpecName: "inventory") pod "6494a5a0-15bc-42c7-a812-8ca66317bea7" (UID: "6494a5a0-15bc-42c7-a812-8ca66317bea7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.432621 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "6494a5a0-15bc-42c7-a812-8ca66317bea7" (UID: "6494a5a0-15bc-42c7-a812-8ca66317bea7"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.446316 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6494a5a0-15bc-42c7-a812-8ca66317bea7" (UID: "6494a5a0-15bc-42c7-a812-8ca66317bea7"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.490359 4828 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.490404 4828 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.490414 4828 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.490424 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.490434 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dlxj\" (UniqueName: \"kubernetes.io/projected/6494a5a0-15bc-42c7-a812-8ca66317bea7-kube-api-access-8dlxj\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.490443 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6494a5a0-15bc-42c7-a812-8ca66317bea7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.868794 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" 
event={"ID":"6494a5a0-15bc-42c7-a812-8ca66317bea7","Type":"ContainerDied","Data":"cd6d7247d3a3fa21157e14e4e1493ca6db190cf3ce6fb81b888a9f238f31d5da"} Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.868864 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd6d7247d3a3fa21157e14e4e1493ca6db190cf3ce6fb81b888a9f238f31d5da" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.868883 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.960258 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4"] Nov 29 07:43:57 crc kubenswrapper[4828]: E1129 07:43:57.961065 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6494a5a0-15bc-42c7-a812-8ca66317bea7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.961093 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6494a5a0-15bc-42c7-a812-8ca66317bea7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.961367 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="6494a5a0-15bc-42c7-a812-8ca66317bea7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.962125 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.965387 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.965646 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.965729 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.965938 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.970079 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 29 07:43:57 crc kubenswrapper[4828]: I1129 07:43:57.985686 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4"] Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.000452 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssgwz\" (UniqueName: \"kubernetes.io/projected/c081856d-532f-4357-958b-b4c2070abbbf-kube-api-access-ssgwz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.000868 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.001062 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.001181 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.001462 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.102960 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.103109 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ssgwz\" (UniqueName: \"kubernetes.io/projected/c081856d-532f-4357-958b-b4c2070abbbf-kube-api-access-ssgwz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.103213 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.103261 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.103333 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.107211 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 
29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.107892 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.107930 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.108183 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.121440 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssgwz\" (UniqueName: \"kubernetes.io/projected/c081856d-532f-4357-958b-b4c2070abbbf-kube-api-access-ssgwz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.284819 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.779354 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4"] Nov 29 07:43:58 crc kubenswrapper[4828]: I1129 07:43:58.878092 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" event={"ID":"c081856d-532f-4357-958b-b4c2070abbbf","Type":"ContainerStarted","Data":"782257d263cd6d85b3cae1b28765eaac366972c9907991cd6229b02bbd54b5b7"} Nov 29 07:43:59 crc kubenswrapper[4828]: I1129 07:43:59.896292 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" event={"ID":"c081856d-532f-4357-958b-b4c2070abbbf","Type":"ContainerStarted","Data":"46ed4cf7e9ea3fc43aeb04a4890cea0b1a3b353a54bee31b431fe87aeb02c162"} Nov 29 07:43:59 crc kubenswrapper[4828]: I1129 07:43:59.925988 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" podStartSLOduration=2.347166146 podStartE2EDuration="2.925952732s" podCreationTimestamp="2025-11-29 07:43:57 +0000 UTC" firstStartedPulling="2025-11-29 07:43:58.786401594 +0000 UTC m=+2578.408477652" lastFinishedPulling="2025-11-29 07:43:59.36518818 +0000 UTC m=+2578.987264238" observedRunningTime="2025-11-29 07:43:59.914410978 +0000 UTC m=+2579.536487046" watchObservedRunningTime="2025-11-29 07:43:59.925952732 +0000 UTC m=+2579.548028800" Nov 29 07:44:11 crc kubenswrapper[4828]: I1129 07:44:11.418202 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:44:11 crc kubenswrapper[4828]: E1129 07:44:11.418960 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:44:25 crc kubenswrapper[4828]: I1129 07:44:25.425669 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:44:26 crc kubenswrapper[4828]: I1129 07:44:26.123210 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"3cb6f348dbeb37c2a6d7f1ae1d1bcd52a80eb94fa3cab19b67e268a4200539bc"} Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.151022 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6"] Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.152699 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.155663 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.155758 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.172351 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6"] Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.326741 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cq29\" (UniqueName: \"kubernetes.io/projected/36114bc6-e6dd-4444-995e-a46842d46405-kube-api-access-2cq29\") pod \"collect-profiles-29406705-lgxt6\" (UID: \"36114bc6-e6dd-4444-995e-a46842d46405\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.327465 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36114bc6-e6dd-4444-995e-a46842d46405-secret-volume\") pod \"collect-profiles-29406705-lgxt6\" (UID: \"36114bc6-e6dd-4444-995e-a46842d46405\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.327530 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36114bc6-e6dd-4444-995e-a46842d46405-config-volume\") pod \"collect-profiles-29406705-lgxt6\" (UID: \"36114bc6-e6dd-4444-995e-a46842d46405\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.429852 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36114bc6-e6dd-4444-995e-a46842d46405-secret-volume\") pod \"collect-profiles-29406705-lgxt6\" (UID: \"36114bc6-e6dd-4444-995e-a46842d46405\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.429929 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36114bc6-e6dd-4444-995e-a46842d46405-config-volume\") pod \"collect-profiles-29406705-lgxt6\" (UID: \"36114bc6-e6dd-4444-995e-a46842d46405\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.429975 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cq29\" (UniqueName: \"kubernetes.io/projected/36114bc6-e6dd-4444-995e-a46842d46405-kube-api-access-2cq29\") pod \"collect-profiles-29406705-lgxt6\" (UID: \"36114bc6-e6dd-4444-995e-a46842d46405\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.431658 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36114bc6-e6dd-4444-995e-a46842d46405-config-volume\") pod \"collect-profiles-29406705-lgxt6\" (UID: \"36114bc6-e6dd-4444-995e-a46842d46405\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.444683 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/36114bc6-e6dd-4444-995e-a46842d46405-secret-volume\") pod \"collect-profiles-29406705-lgxt6\" (UID: \"36114bc6-e6dd-4444-995e-a46842d46405\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.447892 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cq29\" (UniqueName: \"kubernetes.io/projected/36114bc6-e6dd-4444-995e-a46842d46405-kube-api-access-2cq29\") pod \"collect-profiles-29406705-lgxt6\" (UID: \"36114bc6-e6dd-4444-995e-a46842d46405\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.483787 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" Nov 29 07:45:00 crc kubenswrapper[4828]: I1129 07:45:00.976342 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6"] Nov 29 07:45:01 crc kubenswrapper[4828]: I1129 07:45:01.653043 4828 generic.go:334] "Generic (PLEG): container finished" podID="36114bc6-e6dd-4444-995e-a46842d46405" containerID="3e393c1a80bc522a8426670b37a8546d4f5b89ff878dbb2811dceabe835899f4" exitCode=0 Nov 29 07:45:01 crc kubenswrapper[4828]: I1129 07:45:01.653263 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" event={"ID":"36114bc6-e6dd-4444-995e-a46842d46405","Type":"ContainerDied","Data":"3e393c1a80bc522a8426670b37a8546d4f5b89ff878dbb2811dceabe835899f4"} Nov 29 07:45:01 crc kubenswrapper[4828]: I1129 07:45:01.653437 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" 
event={"ID":"36114bc6-e6dd-4444-995e-a46842d46405","Type":"ContainerStarted","Data":"d3f27b4085f736474812ae6068de2e8e15885479414af893ebf0c70ecaaa6b25"} Nov 29 07:45:02 crc kubenswrapper[4828]: I1129 07:45:02.983691 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" Nov 29 07:45:03 crc kubenswrapper[4828]: I1129 07:45:03.078553 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36114bc6-e6dd-4444-995e-a46842d46405-secret-volume\") pod \"36114bc6-e6dd-4444-995e-a46842d46405\" (UID: \"36114bc6-e6dd-4444-995e-a46842d46405\") " Nov 29 07:45:03 crc kubenswrapper[4828]: I1129 07:45:03.079061 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cq29\" (UniqueName: \"kubernetes.io/projected/36114bc6-e6dd-4444-995e-a46842d46405-kube-api-access-2cq29\") pod \"36114bc6-e6dd-4444-995e-a46842d46405\" (UID: \"36114bc6-e6dd-4444-995e-a46842d46405\") " Nov 29 07:45:03 crc kubenswrapper[4828]: I1129 07:45:03.079100 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36114bc6-e6dd-4444-995e-a46842d46405-config-volume\") pod \"36114bc6-e6dd-4444-995e-a46842d46405\" (UID: \"36114bc6-e6dd-4444-995e-a46842d46405\") " Nov 29 07:45:03 crc kubenswrapper[4828]: I1129 07:45:03.079841 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36114bc6-e6dd-4444-995e-a46842d46405-config-volume" (OuterVolumeSpecName: "config-volume") pod "36114bc6-e6dd-4444-995e-a46842d46405" (UID: "36114bc6-e6dd-4444-995e-a46842d46405"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:03 crc kubenswrapper[4828]: I1129 07:45:03.085441 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36114bc6-e6dd-4444-995e-a46842d46405-kube-api-access-2cq29" (OuterVolumeSpecName: "kube-api-access-2cq29") pod "36114bc6-e6dd-4444-995e-a46842d46405" (UID: "36114bc6-e6dd-4444-995e-a46842d46405"). InnerVolumeSpecName "kube-api-access-2cq29". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:45:03 crc kubenswrapper[4828]: I1129 07:45:03.085610 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36114bc6-e6dd-4444-995e-a46842d46405-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "36114bc6-e6dd-4444-995e-a46842d46405" (UID: "36114bc6-e6dd-4444-995e-a46842d46405"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:45:03 crc kubenswrapper[4828]: I1129 07:45:03.181688 4828 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36114bc6-e6dd-4444-995e-a46842d46405-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:03 crc kubenswrapper[4828]: I1129 07:45:03.181727 4828 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36114bc6-e6dd-4444-995e-a46842d46405-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:03 crc kubenswrapper[4828]: I1129 07:45:03.181737 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cq29\" (UniqueName: \"kubernetes.io/projected/36114bc6-e6dd-4444-995e-a46842d46405-kube-api-access-2cq29\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:03 crc kubenswrapper[4828]: I1129 07:45:03.673394 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" 
event={"ID":"36114bc6-e6dd-4444-995e-a46842d46405","Type":"ContainerDied","Data":"d3f27b4085f736474812ae6068de2e8e15885479414af893ebf0c70ecaaa6b25"} Nov 29 07:45:03 crc kubenswrapper[4828]: I1129 07:45:03.673446 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3f27b4085f736474812ae6068de2e8e15885479414af893ebf0c70ecaaa6b25" Nov 29 07:45:03 crc kubenswrapper[4828]: I1129 07:45:03.673450 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-lgxt6" Nov 29 07:45:04 crc kubenswrapper[4828]: I1129 07:45:04.061382 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn"] Nov 29 07:45:04 crc kubenswrapper[4828]: I1129 07:45:04.069563 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-wsbtn"] Nov 29 07:45:05 crc kubenswrapper[4828]: I1129 07:45:05.424900 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="634b47b0-ce44-446c-8f87-531a593c576b" path="/var/lib/kubelet/pods/634b47b0-ce44-446c-8f87-531a593c576b/volumes" Nov 29 07:45:12 crc kubenswrapper[4828]: I1129 07:45:12.850037 4828 scope.go:117] "RemoveContainer" containerID="5f4e3d8563cc18899f9777785bb6fa3e9dfc253c4496cbf8b653ce938561f65b" Nov 29 07:46:41 crc kubenswrapper[4828]: I1129 07:46:41.487021 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:46:41 crc kubenswrapper[4828]: I1129 07:46:41.487672 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:47:11 crc kubenswrapper[4828]: I1129 07:47:11.487155 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:47:11 crc kubenswrapper[4828]: I1129 07:47:11.489014 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:47:28 crc kubenswrapper[4828]: I1129 07:47:28.921321 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xhjqr"] Nov 29 07:47:28 crc kubenswrapper[4828]: E1129 07:47:28.922491 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36114bc6-e6dd-4444-995e-a46842d46405" containerName="collect-profiles" Nov 29 07:47:28 crc kubenswrapper[4828]: I1129 07:47:28.922521 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="36114bc6-e6dd-4444-995e-a46842d46405" containerName="collect-profiles" Nov 29 07:47:28 crc kubenswrapper[4828]: I1129 07:47:28.922818 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="36114bc6-e6dd-4444-995e-a46842d46405" containerName="collect-profiles" Nov 29 07:47:28 crc kubenswrapper[4828]: I1129 07:47:28.924813 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:28 crc kubenswrapper[4828]: I1129 07:47:28.939715 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xhjqr"] Nov 29 07:47:29 crc kubenswrapper[4828]: I1129 07:47:29.051396 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-utilities\") pod \"redhat-operators-xhjqr\" (UID: \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\") " pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:29 crc kubenswrapper[4828]: I1129 07:47:29.051747 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-catalog-content\") pod \"redhat-operators-xhjqr\" (UID: \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\") " pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:29 crc kubenswrapper[4828]: I1129 07:47:29.051954 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c94t\" (UniqueName: \"kubernetes.io/projected/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-kube-api-access-7c94t\") pod \"redhat-operators-xhjqr\" (UID: \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\") " pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:29 crc kubenswrapper[4828]: I1129 07:47:29.153158 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c94t\" (UniqueName: \"kubernetes.io/projected/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-kube-api-access-7c94t\") pod \"redhat-operators-xhjqr\" (UID: \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\") " pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:29 crc kubenswrapper[4828]: I1129 07:47:29.153319 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-utilities\") pod \"redhat-operators-xhjqr\" (UID: \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\") " pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:29 crc kubenswrapper[4828]: I1129 07:47:29.153372 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-catalog-content\") pod \"redhat-operators-xhjqr\" (UID: \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\") " pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:29 crc kubenswrapper[4828]: I1129 07:47:29.153937 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-utilities\") pod \"redhat-operators-xhjqr\" (UID: \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\") " pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:29 crc kubenswrapper[4828]: I1129 07:47:29.153985 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-catalog-content\") pod \"redhat-operators-xhjqr\" (UID: \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\") " pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:29 crc kubenswrapper[4828]: I1129 07:47:29.174586 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c94t\" (UniqueName: \"kubernetes.io/projected/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-kube-api-access-7c94t\") pod \"redhat-operators-xhjqr\" (UID: \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\") " pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:29 crc kubenswrapper[4828]: I1129 07:47:29.248461 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:29 crc kubenswrapper[4828]: I1129 07:47:29.792870 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xhjqr"] Nov 29 07:47:30 crc kubenswrapper[4828]: I1129 07:47:30.061070 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhjqr" event={"ID":"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583","Type":"ContainerStarted","Data":"86b86e41e2a886fff521862f9bf48ca0ec4aa539780ed7ce5e03e3a8af71f561"} Nov 29 07:47:31 crc kubenswrapper[4828]: I1129 07:47:31.086550 4828 generic.go:334] "Generic (PLEG): container finished" podID="a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" containerID="382572dfadd7081f0cbfd92a15401683552449633718e741b6e3477073e03cba" exitCode=0 Nov 29 07:47:31 crc kubenswrapper[4828]: I1129 07:47:31.086851 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhjqr" event={"ID":"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583","Type":"ContainerDied","Data":"382572dfadd7081f0cbfd92a15401683552449633718e741b6e3477073e03cba"} Nov 29 07:47:31 crc kubenswrapper[4828]: I1129 07:47:31.099937 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:47:35 crc kubenswrapper[4828]: I1129 07:47:35.137481 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhjqr" event={"ID":"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583","Type":"ContainerStarted","Data":"87c26b94b731ac5100ccffe79f286cb47a7bdd5ce9c83cc2bdf4ecc0188de484"} Nov 29 07:47:36 crc kubenswrapper[4828]: I1129 07:47:36.152047 4828 generic.go:334] "Generic (PLEG): container finished" podID="a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" containerID="87c26b94b731ac5100ccffe79f286cb47a7bdd5ce9c83cc2bdf4ecc0188de484" exitCode=0 Nov 29 07:47:36 crc kubenswrapper[4828]: I1129 07:47:36.152297 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xhjqr" event={"ID":"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583","Type":"ContainerDied","Data":"87c26b94b731ac5100ccffe79f286cb47a7bdd5ce9c83cc2bdf4ecc0188de484"} Nov 29 07:47:41 crc kubenswrapper[4828]: I1129 07:47:41.202383 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhjqr" event={"ID":"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583","Type":"ContainerStarted","Data":"c220ba9050b14b947a1ff145c734a1f626a5817f71ad34caacf9724e066bc184"} Nov 29 07:47:41 crc kubenswrapper[4828]: I1129 07:47:41.226369 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xhjqr" podStartSLOduration=4.347525794 podStartE2EDuration="13.226333397s" podCreationTimestamp="2025-11-29 07:47:28 +0000 UTC" firstStartedPulling="2025-11-29 07:47:31.099593159 +0000 UTC m=+2790.721669217" lastFinishedPulling="2025-11-29 07:47:39.978400762 +0000 UTC m=+2799.600476820" observedRunningTime="2025-11-29 07:47:41.224985894 +0000 UTC m=+2800.847061962" watchObservedRunningTime="2025-11-29 07:47:41.226333397 +0000 UTC m=+2800.848409455" Nov 29 07:47:41 crc kubenswrapper[4828]: I1129 07:47:41.486739 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:47:41 crc kubenswrapper[4828]: I1129 07:47:41.486811 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:47:41 crc kubenswrapper[4828]: I1129 07:47:41.486860 4828 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:47:41 crc kubenswrapper[4828]: I1129 07:47:41.487678 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3cb6f348dbeb37c2a6d7f1ae1d1bcd52a80eb94fa3cab19b67e268a4200539bc"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:47:41 crc kubenswrapper[4828]: I1129 07:47:41.487759 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://3cb6f348dbeb37c2a6d7f1ae1d1bcd52a80eb94fa3cab19b67e268a4200539bc" gracePeriod=600 Nov 29 07:47:42 crc kubenswrapper[4828]: I1129 07:47:42.214137 4828 generic.go:334] "Generic (PLEG): container finished" podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="3cb6f348dbeb37c2a6d7f1ae1d1bcd52a80eb94fa3cab19b67e268a4200539bc" exitCode=0 Nov 29 07:47:42 crc kubenswrapper[4828]: I1129 07:47:42.214215 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"3cb6f348dbeb37c2a6d7f1ae1d1bcd52a80eb94fa3cab19b67e268a4200539bc"} Nov 29 07:47:42 crc kubenswrapper[4828]: I1129 07:47:42.214306 4828 scope.go:117] "RemoveContainer" containerID="89892aaa9bf3db84f6ec24013b2bc7581e7e7953a4a5a6c6b0cd5232764f6130" Nov 29 07:47:43 crc kubenswrapper[4828]: I1129 07:47:43.226334 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" 
event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456"} Nov 29 07:47:49 crc kubenswrapper[4828]: I1129 07:47:49.248964 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:49 crc kubenswrapper[4828]: I1129 07:47:49.249561 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:49 crc kubenswrapper[4828]: I1129 07:47:49.309118 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:49 crc kubenswrapper[4828]: I1129 07:47:49.357914 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:49 crc kubenswrapper[4828]: I1129 07:47:49.544812 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xhjqr"] Nov 29 07:47:51 crc kubenswrapper[4828]: I1129 07:47:51.299308 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xhjqr" podUID="a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" containerName="registry-server" containerID="cri-o://c220ba9050b14b947a1ff145c734a1f626a5817f71ad34caacf9724e066bc184" gracePeriod=2 Nov 29 07:47:51 crc kubenswrapper[4828]: I1129 07:47:51.742048 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:51 crc kubenswrapper[4828]: I1129 07:47:51.836010 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-utilities\") pod \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\" (UID: \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\") " Nov 29 07:47:51 crc kubenswrapper[4828]: I1129 07:47:51.836083 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c94t\" (UniqueName: \"kubernetes.io/projected/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-kube-api-access-7c94t\") pod \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\" (UID: \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\") " Nov 29 07:47:51 crc kubenswrapper[4828]: I1129 07:47:51.836190 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-catalog-content\") pod \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\" (UID: \"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583\") " Nov 29 07:47:51 crc kubenswrapper[4828]: I1129 07:47:51.837484 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-utilities" (OuterVolumeSpecName: "utilities") pod "a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" (UID: "a72fbfcd-c9eb-497e-b5d0-255a5a0fb583"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:47:51 crc kubenswrapper[4828]: I1129 07:47:51.862848 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-kube-api-access-7c94t" (OuterVolumeSpecName: "kube-api-access-7c94t") pod "a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" (UID: "a72fbfcd-c9eb-497e-b5d0-255a5a0fb583"). InnerVolumeSpecName "kube-api-access-7c94t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:47:51 crc kubenswrapper[4828]: I1129 07:47:51.938308 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:47:51 crc kubenswrapper[4828]: I1129 07:47:51.938349 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c94t\" (UniqueName: \"kubernetes.io/projected/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-kube-api-access-7c94t\") on node \"crc\" DevicePath \"\"" Nov 29 07:47:51 crc kubenswrapper[4828]: I1129 07:47:51.962398 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" (UID: "a72fbfcd-c9eb-497e-b5d0-255a5a0fb583"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.040410 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.311020 4828 generic.go:334] "Generic (PLEG): container finished" podID="a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" containerID="c220ba9050b14b947a1ff145c734a1f626a5817f71ad34caacf9724e066bc184" exitCode=0 Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.311065 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhjqr" event={"ID":"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583","Type":"ContainerDied","Data":"c220ba9050b14b947a1ff145c734a1f626a5817f71ad34caacf9724e066bc184"} Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.311092 4828 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-xhjqr" event={"ID":"a72fbfcd-c9eb-497e-b5d0-255a5a0fb583","Type":"ContainerDied","Data":"86b86e41e2a886fff521862f9bf48ca0ec4aa539780ed7ce5e03e3a8af71f561"} Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.311108 4828 scope.go:117] "RemoveContainer" containerID="c220ba9050b14b947a1ff145c734a1f626a5817f71ad34caacf9724e066bc184" Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.311108 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xhjqr" Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.338748 4828 scope.go:117] "RemoveContainer" containerID="87c26b94b731ac5100ccffe79f286cb47a7bdd5ce9c83cc2bdf4ecc0188de484" Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.351302 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xhjqr"] Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.362060 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xhjqr"] Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.384107 4828 scope.go:117] "RemoveContainer" containerID="382572dfadd7081f0cbfd92a15401683552449633718e741b6e3477073e03cba" Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.432839 4828 scope.go:117] "RemoveContainer" containerID="c220ba9050b14b947a1ff145c734a1f626a5817f71ad34caacf9724e066bc184" Nov 29 07:47:52 crc kubenswrapper[4828]: E1129 07:47:52.439726 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c220ba9050b14b947a1ff145c734a1f626a5817f71ad34caacf9724e066bc184\": container with ID starting with c220ba9050b14b947a1ff145c734a1f626a5817f71ad34caacf9724e066bc184 not found: ID does not exist" containerID="c220ba9050b14b947a1ff145c734a1f626a5817f71ad34caacf9724e066bc184" Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.439784 4828 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c220ba9050b14b947a1ff145c734a1f626a5817f71ad34caacf9724e066bc184"} err="failed to get container status \"c220ba9050b14b947a1ff145c734a1f626a5817f71ad34caacf9724e066bc184\": rpc error: code = NotFound desc = could not find container \"c220ba9050b14b947a1ff145c734a1f626a5817f71ad34caacf9724e066bc184\": container with ID starting with c220ba9050b14b947a1ff145c734a1f626a5817f71ad34caacf9724e066bc184 not found: ID does not exist" Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.439814 4828 scope.go:117] "RemoveContainer" containerID="87c26b94b731ac5100ccffe79f286cb47a7bdd5ce9c83cc2bdf4ecc0188de484" Nov 29 07:47:52 crc kubenswrapper[4828]: E1129 07:47:52.440588 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87c26b94b731ac5100ccffe79f286cb47a7bdd5ce9c83cc2bdf4ecc0188de484\": container with ID starting with 87c26b94b731ac5100ccffe79f286cb47a7bdd5ce9c83cc2bdf4ecc0188de484 not found: ID does not exist" containerID="87c26b94b731ac5100ccffe79f286cb47a7bdd5ce9c83cc2bdf4ecc0188de484" Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.440628 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c26b94b731ac5100ccffe79f286cb47a7bdd5ce9c83cc2bdf4ecc0188de484"} err="failed to get container status \"87c26b94b731ac5100ccffe79f286cb47a7bdd5ce9c83cc2bdf4ecc0188de484\": rpc error: code = NotFound desc = could not find container \"87c26b94b731ac5100ccffe79f286cb47a7bdd5ce9c83cc2bdf4ecc0188de484\": container with ID starting with 87c26b94b731ac5100ccffe79f286cb47a7bdd5ce9c83cc2bdf4ecc0188de484 not found: ID does not exist" Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.440659 4828 scope.go:117] "RemoveContainer" containerID="382572dfadd7081f0cbfd92a15401683552449633718e741b6e3477073e03cba" Nov 29 07:47:52 crc kubenswrapper[4828]: E1129 
07:47:52.441252 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"382572dfadd7081f0cbfd92a15401683552449633718e741b6e3477073e03cba\": container with ID starting with 382572dfadd7081f0cbfd92a15401683552449633718e741b6e3477073e03cba not found: ID does not exist" containerID="382572dfadd7081f0cbfd92a15401683552449633718e741b6e3477073e03cba" Nov 29 07:47:52 crc kubenswrapper[4828]: I1129 07:47:52.441299 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"382572dfadd7081f0cbfd92a15401683552449633718e741b6e3477073e03cba"} err="failed to get container status \"382572dfadd7081f0cbfd92a15401683552449633718e741b6e3477073e03cba\": rpc error: code = NotFound desc = could not find container \"382572dfadd7081f0cbfd92a15401683552449633718e741b6e3477073e03cba\": container with ID starting with 382572dfadd7081f0cbfd92a15401683552449633718e741b6e3477073e03cba not found: ID does not exist" Nov 29 07:47:53 crc kubenswrapper[4828]: I1129 07:47:53.425674 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" path="/var/lib/kubelet/pods/a72fbfcd-c9eb-497e-b5d0-255a5a0fb583/volumes" Nov 29 07:48:30 crc kubenswrapper[4828]: I1129 07:48:30.678364 4828 generic.go:334] "Generic (PLEG): container finished" podID="c081856d-532f-4357-958b-b4c2070abbbf" containerID="46ed4cf7e9ea3fc43aeb04a4890cea0b1a3b353a54bee31b431fe87aeb02c162" exitCode=0 Nov 29 07:48:30 crc kubenswrapper[4828]: I1129 07:48:30.678469 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" event={"ID":"c081856d-532f-4357-958b-b4c2070abbbf","Type":"ContainerDied","Data":"46ed4cf7e9ea3fc43aeb04a4890cea0b1a3b353a54bee31b431fe87aeb02c162"} Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.216955 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.307782 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-ssh-key\") pod \"c081856d-532f-4357-958b-b4c2070abbbf\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.308127 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-libvirt-secret-0\") pod \"c081856d-532f-4357-958b-b4c2070abbbf\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.308335 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssgwz\" (UniqueName: \"kubernetes.io/projected/c081856d-532f-4357-958b-b4c2070abbbf-kube-api-access-ssgwz\") pod \"c081856d-532f-4357-958b-b4c2070abbbf\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.308513 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-inventory\") pod \"c081856d-532f-4357-958b-b4c2070abbbf\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.309101 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-libvirt-combined-ca-bundle\") pod \"c081856d-532f-4357-958b-b4c2070abbbf\" (UID: \"c081856d-532f-4357-958b-b4c2070abbbf\") " Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.315289 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c081856d-532f-4357-958b-b4c2070abbbf" (UID: "c081856d-532f-4357-958b-b4c2070abbbf"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.315989 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c081856d-532f-4357-958b-b4c2070abbbf-kube-api-access-ssgwz" (OuterVolumeSpecName: "kube-api-access-ssgwz") pod "c081856d-532f-4357-958b-b4c2070abbbf" (UID: "c081856d-532f-4357-958b-b4c2070abbbf"). InnerVolumeSpecName "kube-api-access-ssgwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.342520 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "c081856d-532f-4357-958b-b4c2070abbbf" (UID: "c081856d-532f-4357-958b-b4c2070abbbf"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.345451 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c081856d-532f-4357-958b-b4c2070abbbf" (UID: "c081856d-532f-4357-958b-b4c2070abbbf"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.348742 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-inventory" (OuterVolumeSpecName: "inventory") pod "c081856d-532f-4357-958b-b4c2070abbbf" (UID: "c081856d-532f-4357-958b-b4c2070abbbf"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.411171 4828 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.411224 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssgwz\" (UniqueName: \"kubernetes.io/projected/c081856d-532f-4357-958b-b4c2070abbbf-kube-api-access-ssgwz\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.411241 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.411255 4828 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.411384 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c081856d-532f-4357-958b-b4c2070abbbf-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.697377 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" event={"ID":"c081856d-532f-4357-958b-b4c2070abbbf","Type":"ContainerDied","Data":"782257d263cd6d85b3cae1b28765eaac366972c9907991cd6229b02bbd54b5b7"} Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.697422 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="782257d263cd6d85b3cae1b28765eaac366972c9907991cd6229b02bbd54b5b7" Nov 29 07:48:32 
crc kubenswrapper[4828]: I1129 07:48:32.697478 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.801180 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7"] Nov 29 07:48:32 crc kubenswrapper[4828]: E1129 07:48:32.802573 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c081856d-532f-4357-958b-b4c2070abbbf" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.802729 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c081856d-532f-4357-958b-b4c2070abbbf" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 29 07:48:32 crc kubenswrapper[4828]: E1129 07:48:32.802837 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" containerName="extract-content" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.802913 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" containerName="extract-content" Nov 29 07:48:32 crc kubenswrapper[4828]: E1129 07:48:32.803036 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" containerName="registry-server" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.803112 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" containerName="registry-server" Nov 29 07:48:32 crc kubenswrapper[4828]: E1129 07:48:32.803195 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" containerName="extract-utilities" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.803296 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" 
containerName="extract-utilities" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.803972 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a72fbfcd-c9eb-497e-b5d0-255a5a0fb583" containerName="registry-server" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.804029 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c081856d-532f-4357-958b-b4c2070abbbf" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.804881 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.808621 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.808768 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.809564 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.809690 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.810077 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.810328 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.811716 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.813545 4828 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7"] Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.920466 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.920545 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.920607 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.920639 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/839a08fc-14bb-4b73-8028-6dec803de923-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.920666 4828 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.920728 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.920766 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.920794 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:32 crc kubenswrapper[4828]: I1129 07:48:32.920823 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfzzc\" (UniqueName: 
\"kubernetes.io/projected/839a08fc-14bb-4b73-8028-6dec803de923-kube-api-access-xfzzc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.022887 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.023004 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.023728 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.023817 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfzzc\" (UniqueName: \"kubernetes.io/projected/839a08fc-14bb-4b73-8028-6dec803de923-kube-api-access-xfzzc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 
07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.023874 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.023913 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.023978 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.024006 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/839a08fc-14bb-4b73-8028-6dec803de923-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.024034 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.025113 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/839a08fc-14bb-4b73-8028-6dec803de923-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.028526 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.028526 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.030198 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.030302 
4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.031423 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.032155 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.033783 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.044625 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfzzc\" (UniqueName: \"kubernetes.io/projected/839a08fc-14bb-4b73-8028-6dec803de923-kube-api-access-xfzzc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xrjp7\" (UID: 
\"839a08fc-14bb-4b73-8028-6dec803de923\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.124897 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.653409 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7"] Nov 29 07:48:33 crc kubenswrapper[4828]: W1129 07:48:33.659415 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod839a08fc_14bb_4b73_8028_6dec803de923.slice/crio-ecf87bf8ff44b11db311ad6675cf869b30932f2c73f10eacdb8bc88df3c2211d WatchSource:0}: Error finding container ecf87bf8ff44b11db311ad6675cf869b30932f2c73f10eacdb8bc88df3c2211d: Status 404 returned error can't find the container with id ecf87bf8ff44b11db311ad6675cf869b30932f2c73f10eacdb8bc88df3c2211d Nov 29 07:48:33 crc kubenswrapper[4828]: I1129 07:48:33.710237 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" event={"ID":"839a08fc-14bb-4b73-8028-6dec803de923","Type":"ContainerStarted","Data":"ecf87bf8ff44b11db311ad6675cf869b30932f2c73f10eacdb8bc88df3c2211d"} Nov 29 07:48:34 crc kubenswrapper[4828]: I1129 07:48:34.721681 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" event={"ID":"839a08fc-14bb-4b73-8028-6dec803de923","Type":"ContainerStarted","Data":"2ca9ee34f0ee47af6d7c27e34d75de29e37f7840c9e0032b80fdea68ca8b0f1b"} Nov 29 07:48:34 crc kubenswrapper[4828]: I1129 07:48:34.747015 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" podStartSLOduration=2.305627581 podStartE2EDuration="2.746996375s" podCreationTimestamp="2025-11-29 
07:48:32 +0000 UTC" firstStartedPulling="2025-11-29 07:48:33.663740623 +0000 UTC m=+2853.285816681" lastFinishedPulling="2025-11-29 07:48:34.105109417 +0000 UTC m=+2853.727185475" observedRunningTime="2025-11-29 07:48:34.745914689 +0000 UTC m=+2854.367990747" watchObservedRunningTime="2025-11-29 07:48:34.746996375 +0000 UTC m=+2854.369072433" Nov 29 07:48:40 crc kubenswrapper[4828]: I1129 07:48:40.829685 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6t2v9"] Nov 29 07:48:40 crc kubenswrapper[4828]: I1129 07:48:40.832580 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6t2v9" Nov 29 07:48:40 crc kubenswrapper[4828]: I1129 07:48:40.841379 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6t2v9"] Nov 29 07:48:40 crc kubenswrapper[4828]: I1129 07:48:40.842921 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e871842-8326-4072-b419-b8b68fa3c161-catalog-content\") pod \"certified-operators-6t2v9\" (UID: \"5e871842-8326-4072-b419-b8b68fa3c161\") " pod="openshift-marketplace/certified-operators-6t2v9" Nov 29 07:48:40 crc kubenswrapper[4828]: I1129 07:48:40.843112 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e871842-8326-4072-b419-b8b68fa3c161-utilities\") pod \"certified-operators-6t2v9\" (UID: \"5e871842-8326-4072-b419-b8b68fa3c161\") " pod="openshift-marketplace/certified-operators-6t2v9" Nov 29 07:48:40 crc kubenswrapper[4828]: I1129 07:48:40.843218 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn6bc\" (UniqueName: \"kubernetes.io/projected/5e871842-8326-4072-b419-b8b68fa3c161-kube-api-access-wn6bc\") pod 
\"certified-operators-6t2v9\" (UID: \"5e871842-8326-4072-b419-b8b68fa3c161\") " pod="openshift-marketplace/certified-operators-6t2v9" Nov 29 07:48:40 crc kubenswrapper[4828]: I1129 07:48:40.944580 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e871842-8326-4072-b419-b8b68fa3c161-utilities\") pod \"certified-operators-6t2v9\" (UID: \"5e871842-8326-4072-b419-b8b68fa3c161\") " pod="openshift-marketplace/certified-operators-6t2v9" Nov 29 07:48:40 crc kubenswrapper[4828]: I1129 07:48:40.944684 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn6bc\" (UniqueName: \"kubernetes.io/projected/5e871842-8326-4072-b419-b8b68fa3c161-kube-api-access-wn6bc\") pod \"certified-operators-6t2v9\" (UID: \"5e871842-8326-4072-b419-b8b68fa3c161\") " pod="openshift-marketplace/certified-operators-6t2v9" Nov 29 07:48:40 crc kubenswrapper[4828]: I1129 07:48:40.944713 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e871842-8326-4072-b419-b8b68fa3c161-catalog-content\") pod \"certified-operators-6t2v9\" (UID: \"5e871842-8326-4072-b419-b8b68fa3c161\") " pod="openshift-marketplace/certified-operators-6t2v9" Nov 29 07:48:40 crc kubenswrapper[4828]: I1129 07:48:40.945246 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e871842-8326-4072-b419-b8b68fa3c161-catalog-content\") pod \"certified-operators-6t2v9\" (UID: \"5e871842-8326-4072-b419-b8b68fa3c161\") " pod="openshift-marketplace/certified-operators-6t2v9" Nov 29 07:48:40 crc kubenswrapper[4828]: I1129 07:48:40.945260 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e871842-8326-4072-b419-b8b68fa3c161-utilities\") pod \"certified-operators-6t2v9\" (UID: 
\"5e871842-8326-4072-b419-b8b68fa3c161\") " pod="openshift-marketplace/certified-operators-6t2v9" Nov 29 07:48:40 crc kubenswrapper[4828]: I1129 07:48:40.965020 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn6bc\" (UniqueName: \"kubernetes.io/projected/5e871842-8326-4072-b419-b8b68fa3c161-kube-api-access-wn6bc\") pod \"certified-operators-6t2v9\" (UID: \"5e871842-8326-4072-b419-b8b68fa3c161\") " pod="openshift-marketplace/certified-operators-6t2v9" Nov 29 07:48:41 crc kubenswrapper[4828]: I1129 07:48:41.156922 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6t2v9" Nov 29 07:48:41 crc kubenswrapper[4828]: I1129 07:48:41.718665 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6t2v9"] Nov 29 07:48:41 crc kubenswrapper[4828]: W1129 07:48:41.725010 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e871842_8326_4072_b419_b8b68fa3c161.slice/crio-e43196adb6142cdeda44cfdff2bb5e68f6ab1a98dfeac7c5104b2ae1ee5fa78c WatchSource:0}: Error finding container e43196adb6142cdeda44cfdff2bb5e68f6ab1a98dfeac7c5104b2ae1ee5fa78c: Status 404 returned error can't find the container with id e43196adb6142cdeda44cfdff2bb5e68f6ab1a98dfeac7c5104b2ae1ee5fa78c Nov 29 07:48:41 crc kubenswrapper[4828]: I1129 07:48:41.788555 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6t2v9" event={"ID":"5e871842-8326-4072-b419-b8b68fa3c161","Type":"ContainerStarted","Data":"e43196adb6142cdeda44cfdff2bb5e68f6ab1a98dfeac7c5104b2ae1ee5fa78c"} Nov 29 07:48:42 crc kubenswrapper[4828]: I1129 07:48:42.806763 4828 generic.go:334] "Generic (PLEG): container finished" podID="5e871842-8326-4072-b419-b8b68fa3c161" containerID="26363c84c8bdd5bd41d3af5966b543c7d3b6b2a961508f44aebdda32b81ba589" exitCode=0 Nov 29 07:48:42 
crc kubenswrapper[4828]: I1129 07:48:42.807388 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6t2v9" event={"ID":"5e871842-8326-4072-b419-b8b68fa3c161","Type":"ContainerDied","Data":"26363c84c8bdd5bd41d3af5966b543c7d3b6b2a961508f44aebdda32b81ba589"}
Nov 29 07:48:43 crc kubenswrapper[4828]: I1129 07:48:43.816300 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6t2v9" event={"ID":"5e871842-8326-4072-b419-b8b68fa3c161","Type":"ContainerStarted","Data":"b81732ef78c1044ab311ce965f6559d093758447d39063081744967fa09bb4ec"}
Nov 29 07:48:44 crc kubenswrapper[4828]: I1129 07:48:44.826225 4828 generic.go:334] "Generic (PLEG): container finished" podID="5e871842-8326-4072-b419-b8b68fa3c161" containerID="b81732ef78c1044ab311ce965f6559d093758447d39063081744967fa09bb4ec" exitCode=0
Nov 29 07:48:44 crc kubenswrapper[4828]: I1129 07:48:44.826303 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6t2v9" event={"ID":"5e871842-8326-4072-b419-b8b68fa3c161","Type":"ContainerDied","Data":"b81732ef78c1044ab311ce965f6559d093758447d39063081744967fa09bb4ec"}
Nov 29 07:48:45 crc kubenswrapper[4828]: I1129 07:48:45.843693 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6t2v9" event={"ID":"5e871842-8326-4072-b419-b8b68fa3c161","Type":"ContainerStarted","Data":"73eeca88120cd8ee07d0c517d954935d20491603695da7a5225c1cfd17d5a2eb"}
Nov 29 07:48:45 crc kubenswrapper[4828]: I1129 07:48:45.866720 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6t2v9" podStartSLOduration=3.461652683 podStartE2EDuration="5.866701785s" podCreationTimestamp="2025-11-29 07:48:40 +0000 UTC" firstStartedPulling="2025-11-29 07:48:42.811357803 +0000 UTC m=+2862.433433851" lastFinishedPulling="2025-11-29 07:48:45.216406895 +0000 UTC m=+2864.838482953" observedRunningTime="2025-11-29 07:48:45.859687575 +0000 UTC m=+2865.481763633" watchObservedRunningTime="2025-11-29 07:48:45.866701785 +0000 UTC m=+2865.488777843"
Nov 29 07:48:51 crc kubenswrapper[4828]: I1129 07:48:51.157455 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6t2v9"
Nov 29 07:48:51 crc kubenswrapper[4828]: I1129 07:48:51.158599 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6t2v9"
Nov 29 07:48:51 crc kubenswrapper[4828]: I1129 07:48:51.212359 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6t2v9"
Nov 29 07:48:51 crc kubenswrapper[4828]: I1129 07:48:51.935037 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6t2v9"
Nov 29 07:48:51 crc kubenswrapper[4828]: I1129 07:48:51.983238 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6t2v9"]
Nov 29 07:48:53 crc kubenswrapper[4828]: I1129 07:48:53.908775 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6t2v9" podUID="5e871842-8326-4072-b419-b8b68fa3c161" containerName="registry-server" containerID="cri-o://73eeca88120cd8ee07d0c517d954935d20491603695da7a5225c1cfd17d5a2eb" gracePeriod=2
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.399998 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6t2v9"
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.508264 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e871842-8326-4072-b419-b8b68fa3c161-catalog-content\") pod \"5e871842-8326-4072-b419-b8b68fa3c161\" (UID: \"5e871842-8326-4072-b419-b8b68fa3c161\") "
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.508481 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn6bc\" (UniqueName: \"kubernetes.io/projected/5e871842-8326-4072-b419-b8b68fa3c161-kube-api-access-wn6bc\") pod \"5e871842-8326-4072-b419-b8b68fa3c161\" (UID: \"5e871842-8326-4072-b419-b8b68fa3c161\") "
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.508537 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e871842-8326-4072-b419-b8b68fa3c161-utilities\") pod \"5e871842-8326-4072-b419-b8b68fa3c161\" (UID: \"5e871842-8326-4072-b419-b8b68fa3c161\") "
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.509562 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e871842-8326-4072-b419-b8b68fa3c161-utilities" (OuterVolumeSpecName: "utilities") pod "5e871842-8326-4072-b419-b8b68fa3c161" (UID: "5e871842-8326-4072-b419-b8b68fa3c161"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.514504 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e871842-8326-4072-b419-b8b68fa3c161-kube-api-access-wn6bc" (OuterVolumeSpecName: "kube-api-access-wn6bc") pod "5e871842-8326-4072-b419-b8b68fa3c161" (UID: "5e871842-8326-4072-b419-b8b68fa3c161"). InnerVolumeSpecName "kube-api-access-wn6bc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.554490 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e871842-8326-4072-b419-b8b68fa3c161-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e871842-8326-4072-b419-b8b68fa3c161" (UID: "5e871842-8326-4072-b419-b8b68fa3c161"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.612075 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e871842-8326-4072-b419-b8b68fa3c161-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.612142 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn6bc\" (UniqueName: \"kubernetes.io/projected/5e871842-8326-4072-b419-b8b68fa3c161-kube-api-access-wn6bc\") on node \"crc\" DevicePath \"\""
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.612157 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e871842-8326-4072-b419-b8b68fa3c161-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.918702 4828 generic.go:334] "Generic (PLEG): container finished" podID="5e871842-8326-4072-b419-b8b68fa3c161" containerID="73eeca88120cd8ee07d0c517d954935d20491603695da7a5225c1cfd17d5a2eb" exitCode=0
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.918742 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6t2v9" event={"ID":"5e871842-8326-4072-b419-b8b68fa3c161","Type":"ContainerDied","Data":"73eeca88120cd8ee07d0c517d954935d20491603695da7a5225c1cfd17d5a2eb"}
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.918772 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6t2v9" event={"ID":"5e871842-8326-4072-b419-b8b68fa3c161","Type":"ContainerDied","Data":"e43196adb6142cdeda44cfdff2bb5e68f6ab1a98dfeac7c5104b2ae1ee5fa78c"}
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.918788 4828 scope.go:117] "RemoveContainer" containerID="73eeca88120cd8ee07d0c517d954935d20491603695da7a5225c1cfd17d5a2eb"
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.918795 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6t2v9"
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.954673 4828 scope.go:117] "RemoveContainer" containerID="b81732ef78c1044ab311ce965f6559d093758447d39063081744967fa09bb4ec"
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.983919 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6t2v9"]
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.993240 4828 scope.go:117] "RemoveContainer" containerID="26363c84c8bdd5bd41d3af5966b543c7d3b6b2a961508f44aebdda32b81ba589"
Nov 29 07:48:54 crc kubenswrapper[4828]: I1129 07:48:54.994432 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6t2v9"]
Nov 29 07:48:55 crc kubenswrapper[4828]: I1129 07:48:55.050251 4828 scope.go:117] "RemoveContainer" containerID="73eeca88120cd8ee07d0c517d954935d20491603695da7a5225c1cfd17d5a2eb"
Nov 29 07:48:55 crc kubenswrapper[4828]: E1129 07:48:55.050745 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73eeca88120cd8ee07d0c517d954935d20491603695da7a5225c1cfd17d5a2eb\": container with ID starting with 73eeca88120cd8ee07d0c517d954935d20491603695da7a5225c1cfd17d5a2eb not found: ID does not exist" containerID="73eeca88120cd8ee07d0c517d954935d20491603695da7a5225c1cfd17d5a2eb"
Nov 29 07:48:55 crc kubenswrapper[4828]: I1129 07:48:55.050868 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73eeca88120cd8ee07d0c517d954935d20491603695da7a5225c1cfd17d5a2eb"} err="failed to get container status \"73eeca88120cd8ee07d0c517d954935d20491603695da7a5225c1cfd17d5a2eb\": rpc error: code = NotFound desc = could not find container \"73eeca88120cd8ee07d0c517d954935d20491603695da7a5225c1cfd17d5a2eb\": container with ID starting with 73eeca88120cd8ee07d0c517d954935d20491603695da7a5225c1cfd17d5a2eb not found: ID does not exist"
Nov 29 07:48:55 crc kubenswrapper[4828]: I1129 07:48:55.050953 4828 scope.go:117] "RemoveContainer" containerID="b81732ef78c1044ab311ce965f6559d093758447d39063081744967fa09bb4ec"
Nov 29 07:48:55 crc kubenswrapper[4828]: E1129 07:48:55.051219 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b81732ef78c1044ab311ce965f6559d093758447d39063081744967fa09bb4ec\": container with ID starting with b81732ef78c1044ab311ce965f6559d093758447d39063081744967fa09bb4ec not found: ID does not exist" containerID="b81732ef78c1044ab311ce965f6559d093758447d39063081744967fa09bb4ec"
Nov 29 07:48:55 crc kubenswrapper[4828]: I1129 07:48:55.051340 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81732ef78c1044ab311ce965f6559d093758447d39063081744967fa09bb4ec"} err="failed to get container status \"b81732ef78c1044ab311ce965f6559d093758447d39063081744967fa09bb4ec\": rpc error: code = NotFound desc = could not find container \"b81732ef78c1044ab311ce965f6559d093758447d39063081744967fa09bb4ec\": container with ID starting with b81732ef78c1044ab311ce965f6559d093758447d39063081744967fa09bb4ec not found: ID does not exist"
Nov 29 07:48:55 crc kubenswrapper[4828]: I1129 07:48:55.051424 4828 scope.go:117] "RemoveContainer" containerID="26363c84c8bdd5bd41d3af5966b543c7d3b6b2a961508f44aebdda32b81ba589"
Nov 29 07:48:55 crc kubenswrapper[4828]: E1129 07:48:55.051701 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26363c84c8bdd5bd41d3af5966b543c7d3b6b2a961508f44aebdda32b81ba589\": container with ID starting with 26363c84c8bdd5bd41d3af5966b543c7d3b6b2a961508f44aebdda32b81ba589 not found: ID does not exist" containerID="26363c84c8bdd5bd41d3af5966b543c7d3b6b2a961508f44aebdda32b81ba589"
Nov 29 07:48:55 crc kubenswrapper[4828]: I1129 07:48:55.051803 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26363c84c8bdd5bd41d3af5966b543c7d3b6b2a961508f44aebdda32b81ba589"} err="failed to get container status \"26363c84c8bdd5bd41d3af5966b543c7d3b6b2a961508f44aebdda32b81ba589\": rpc error: code = NotFound desc = could not find container \"26363c84c8bdd5bd41d3af5966b543c7d3b6b2a961508f44aebdda32b81ba589\": container with ID starting with 26363c84c8bdd5bd41d3af5966b543c7d3b6b2a961508f44aebdda32b81ba589 not found: ID does not exist"
Nov 29 07:48:55 crc kubenswrapper[4828]: I1129 07:48:55.426086 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e871842-8326-4072-b419-b8b68fa3c161" path="/var/lib/kubelet/pods/5e871842-8326-4072-b419-b8b68fa3c161/volumes"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.038918 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wb95z"]
Nov 29 07:49:46 crc kubenswrapper[4828]: E1129 07:49:46.041734 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e871842-8326-4072-b419-b8b68fa3c161" containerName="registry-server"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.041756 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e871842-8326-4072-b419-b8b68fa3c161" containerName="registry-server"
Nov 29 07:49:46 crc kubenswrapper[4828]: E1129 07:49:46.041794 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e871842-8326-4072-b419-b8b68fa3c161" containerName="extract-utilities"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.041802 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e871842-8326-4072-b419-b8b68fa3c161" containerName="extract-utilities"
Nov 29 07:49:46 crc kubenswrapper[4828]: E1129 07:49:46.041824 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e871842-8326-4072-b419-b8b68fa3c161" containerName="extract-content"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.041833 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e871842-8326-4072-b419-b8b68fa3c161" containerName="extract-content"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.042103 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e871842-8326-4072-b419-b8b68fa3c161" containerName="registry-server"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.044104 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb95z"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.049161 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wb95z"]
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.115109 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-catalog-content\") pod \"community-operators-wb95z\" (UID: \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\") " pod="openshift-marketplace/community-operators-wb95z"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.115555 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-utilities\") pod \"community-operators-wb95z\" (UID: \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\") " pod="openshift-marketplace/community-operators-wb95z"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.115609 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn6sd\" (UniqueName: \"kubernetes.io/projected/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-kube-api-access-gn6sd\") pod \"community-operators-wb95z\" (UID: \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\") " pod="openshift-marketplace/community-operators-wb95z"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.217157 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-utilities\") pod \"community-operators-wb95z\" (UID: \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\") " pod="openshift-marketplace/community-operators-wb95z"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.217229 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn6sd\" (UniqueName: \"kubernetes.io/projected/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-kube-api-access-gn6sd\") pod \"community-operators-wb95z\" (UID: \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\") " pod="openshift-marketplace/community-operators-wb95z"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.217347 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-catalog-content\") pod \"community-operators-wb95z\" (UID: \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\") " pod="openshift-marketplace/community-operators-wb95z"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.217867 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-catalog-content\") pod \"community-operators-wb95z\" (UID: \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\") " pod="openshift-marketplace/community-operators-wb95z"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.217863 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-utilities\") pod \"community-operators-wb95z\" (UID: \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\") " pod="openshift-marketplace/community-operators-wb95z"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.238745 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn6sd\" (UniqueName: \"kubernetes.io/projected/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-kube-api-access-gn6sd\") pod \"community-operators-wb95z\" (UID: \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\") " pod="openshift-marketplace/community-operators-wb95z"
Nov 29 07:49:46 crc kubenswrapper[4828]: I1129 07:49:46.367804 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb95z"
Nov 29 07:49:47 crc kubenswrapper[4828]: I1129 07:49:47.001354 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wb95z"]
Nov 29 07:49:47 crc kubenswrapper[4828]: I1129 07:49:47.395015 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb95z" event={"ID":"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1","Type":"ContainerStarted","Data":"6cf287b3c542c3ec30e1c183dae76319611d2a705e02abb07c88750fecef0c23"}
Nov 29 07:49:47 crc kubenswrapper[4828]: I1129 07:49:47.395340 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb95z" event={"ID":"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1","Type":"ContainerStarted","Data":"d12453125b9e1f2c732451e601f75eac27f1c78bfb271786d2afe51a505352f4"}
Nov 29 07:49:48 crc kubenswrapper[4828]: I1129 07:49:48.406092 4828 generic.go:334] "Generic (PLEG): container finished" podID="6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" containerID="6cf287b3c542c3ec30e1c183dae76319611d2a705e02abb07c88750fecef0c23" exitCode=0
Nov 29 07:49:48 crc kubenswrapper[4828]: I1129 07:49:48.406236 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb95z" event={"ID":"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1","Type":"ContainerDied","Data":"6cf287b3c542c3ec30e1c183dae76319611d2a705e02abb07c88750fecef0c23"}
Nov 29 07:49:49 crc kubenswrapper[4828]: I1129 07:49:49.428849 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb95z" event={"ID":"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1","Type":"ContainerStarted","Data":"59bd73855fe6d74ad4f0efff74ac4979d86e1db67b898133e5891ca44d4a8273"}
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.215961 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d2fbl"]
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.218469 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.227089 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2fbl"]
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.313335 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b5bce7-4999-4bed-84dd-ca11c052c0c0-catalog-content\") pod \"redhat-marketplace-d2fbl\" (UID: \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\") " pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.313461 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b5bce7-4999-4bed-84dd-ca11c052c0c0-utilities\") pod \"redhat-marketplace-d2fbl\" (UID: \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\") " pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.313534 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmfmv\" (UniqueName: \"kubernetes.io/projected/96b5bce7-4999-4bed-84dd-ca11c052c0c0-kube-api-access-qmfmv\") pod \"redhat-marketplace-d2fbl\" (UID: \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\") " pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.416215 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b5bce7-4999-4bed-84dd-ca11c052c0c0-utilities\") pod \"redhat-marketplace-d2fbl\" (UID: \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\") " pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.416380 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmfmv\" (UniqueName: \"kubernetes.io/projected/96b5bce7-4999-4bed-84dd-ca11c052c0c0-kube-api-access-qmfmv\") pod \"redhat-marketplace-d2fbl\" (UID: \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\") " pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.416480 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b5bce7-4999-4bed-84dd-ca11c052c0c0-catalog-content\") pod \"redhat-marketplace-d2fbl\" (UID: \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\") " pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.416812 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b5bce7-4999-4bed-84dd-ca11c052c0c0-utilities\") pod \"redhat-marketplace-d2fbl\" (UID: \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\") " pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.417136 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b5bce7-4999-4bed-84dd-ca11c052c0c0-catalog-content\") pod \"redhat-marketplace-d2fbl\" (UID: \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\") " pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.438416 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmfmv\" (UniqueName: \"kubernetes.io/projected/96b5bce7-4999-4bed-84dd-ca11c052c0c0-kube-api-access-qmfmv\") pod \"redhat-marketplace-d2fbl\" (UID: \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\") " pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.443729 4828 generic.go:334] "Generic (PLEG): container finished" podID="6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" containerID="59bd73855fe6d74ad4f0efff74ac4979d86e1db67b898133e5891ca44d4a8273" exitCode=0
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.443785 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb95z" event={"ID":"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1","Type":"ContainerDied","Data":"59bd73855fe6d74ad4f0efff74ac4979d86e1db67b898133e5891ca44d4a8273"}
Nov 29 07:49:50 crc kubenswrapper[4828]: I1129 07:49:50.594520 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:49:51 crc kubenswrapper[4828]: I1129 07:49:51.059498 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2fbl"]
Nov 29 07:49:51 crc kubenswrapper[4828]: I1129 07:49:51.453130 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2fbl" event={"ID":"96b5bce7-4999-4bed-84dd-ca11c052c0c0","Type":"ContainerStarted","Data":"6736d98312c6572d34a92fedff7aac65dc5671b69675b83e28cf1897283ea14a"}
Nov 29 07:49:52 crc kubenswrapper[4828]: I1129 07:49:52.463864 4828 generic.go:334] "Generic (PLEG): container finished" podID="96b5bce7-4999-4bed-84dd-ca11c052c0c0" containerID="6509fb13f7142367fb43ae41f47732be5f9def7ca9ff5216165ce79572de5f7b" exitCode=0
Nov 29 07:49:52 crc kubenswrapper[4828]: I1129 07:49:52.463957 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2fbl" event={"ID":"96b5bce7-4999-4bed-84dd-ca11c052c0c0","Type":"ContainerDied","Data":"6509fb13f7142367fb43ae41f47732be5f9def7ca9ff5216165ce79572de5f7b"}
Nov 29 07:49:52 crc kubenswrapper[4828]: I1129 07:49:52.469908 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb95z" event={"ID":"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1","Type":"ContainerStarted","Data":"802c17f1a729b8a71d61d99386cd077ca310647d8c76187a028af109ea192047"}
Nov 29 07:49:52 crc kubenswrapper[4828]: I1129 07:49:52.501338 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wb95z" podStartSLOduration=2.901401966 podStartE2EDuration="6.501313981s" podCreationTimestamp="2025-11-29 07:49:46 +0000 UTC" firstStartedPulling="2025-11-29 07:49:48.408298682 +0000 UTC m=+2928.030374740" lastFinishedPulling="2025-11-29 07:49:52.008210697 +0000 UTC m=+2931.630286755" observedRunningTime="2025-11-29 07:49:52.500918851 +0000 UTC m=+2932.122994919" watchObservedRunningTime="2025-11-29 07:49:52.501313981 +0000 UTC m=+2932.123390039"
Nov 29 07:49:54 crc kubenswrapper[4828]: I1129 07:49:54.495114 4828 generic.go:334] "Generic (PLEG): container finished" podID="96b5bce7-4999-4bed-84dd-ca11c052c0c0" containerID="3347d8c5e5256646fcacecb64e6ba8d58b70ad2d5c8db2c649b1e3f56763ca44" exitCode=0
Nov 29 07:49:54 crc kubenswrapper[4828]: I1129 07:49:54.495787 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2fbl" event={"ID":"96b5bce7-4999-4bed-84dd-ca11c052c0c0","Type":"ContainerDied","Data":"3347d8c5e5256646fcacecb64e6ba8d58b70ad2d5c8db2c649b1e3f56763ca44"}
Nov 29 07:49:55 crc kubenswrapper[4828]: I1129 07:49:55.509968 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2fbl" event={"ID":"96b5bce7-4999-4bed-84dd-ca11c052c0c0","Type":"ContainerStarted","Data":"2e5d6ccce3a19756fa95554c5397d77dc42b477a6c0579843626468774d29bbb"}
Nov 29 07:49:55 crc kubenswrapper[4828]: I1129 07:49:55.531770 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d2fbl" podStartSLOduration=2.9294279960000003 podStartE2EDuration="5.53174833s" podCreationTimestamp="2025-11-29 07:49:50 +0000 UTC" firstStartedPulling="2025-11-29 07:49:52.465868336 +0000 UTC m=+2932.087944394" lastFinishedPulling="2025-11-29 07:49:55.06818867 +0000 UTC m=+2934.690264728" observedRunningTime="2025-11-29 07:49:55.527260956 +0000 UTC m=+2935.149337034" watchObservedRunningTime="2025-11-29 07:49:55.53174833 +0000 UTC m=+2935.153824388"
Nov 29 07:49:56 crc kubenswrapper[4828]: I1129 07:49:56.368344 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wb95z"
Nov 29 07:49:56 crc kubenswrapper[4828]: I1129 07:49:56.368390 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wb95z"
Nov 29 07:49:56 crc kubenswrapper[4828]: I1129 07:49:56.416137 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wb95z"
Nov 29 07:50:00 crc kubenswrapper[4828]: I1129 07:50:00.595316 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:50:00 crc kubenswrapper[4828]: I1129 07:50:00.595860 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:50:00 crc kubenswrapper[4828]: I1129 07:50:00.646867 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:50:01 crc kubenswrapper[4828]: I1129 07:50:01.615663 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:50:01 crc kubenswrapper[4828]: I1129 07:50:01.660958 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2fbl"]
Nov 29 07:50:03 crc kubenswrapper[4828]: I1129 07:50:03.584536 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d2fbl" podUID="96b5bce7-4999-4bed-84dd-ca11c052c0c0" containerName="registry-server" containerID="cri-o://2e5d6ccce3a19756fa95554c5397d77dc42b477a6c0579843626468774d29bbb" gracePeriod=2
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.049305 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.195398 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b5bce7-4999-4bed-84dd-ca11c052c0c0-utilities\") pod \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\" (UID: \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\") "
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.195482 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmfmv\" (UniqueName: \"kubernetes.io/projected/96b5bce7-4999-4bed-84dd-ca11c052c0c0-kube-api-access-qmfmv\") pod \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\" (UID: \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\") "
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.195521 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b5bce7-4999-4bed-84dd-ca11c052c0c0-catalog-content\") pod \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\" (UID: \"96b5bce7-4999-4bed-84dd-ca11c052c0c0\") "
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.196398 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b5bce7-4999-4bed-84dd-ca11c052c0c0-utilities" (OuterVolumeSpecName: "utilities") pod "96b5bce7-4999-4bed-84dd-ca11c052c0c0" (UID: "96b5bce7-4999-4bed-84dd-ca11c052c0c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.204363 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b5bce7-4999-4bed-84dd-ca11c052c0c0-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.206648 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b5bce7-4999-4bed-84dd-ca11c052c0c0-kube-api-access-qmfmv" (OuterVolumeSpecName: "kube-api-access-qmfmv") pod "96b5bce7-4999-4bed-84dd-ca11c052c0c0" (UID: "96b5bce7-4999-4bed-84dd-ca11c052c0c0"). InnerVolumeSpecName "kube-api-access-qmfmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.218156 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b5bce7-4999-4bed-84dd-ca11c052c0c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96b5bce7-4999-4bed-84dd-ca11c052c0c0" (UID: "96b5bce7-4999-4bed-84dd-ca11c052c0c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.306134 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmfmv\" (UniqueName: \"kubernetes.io/projected/96b5bce7-4999-4bed-84dd-ca11c052c0c0-kube-api-access-qmfmv\") on node \"crc\" DevicePath \"\""
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.306193 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b5bce7-4999-4bed-84dd-ca11c052c0c0-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.595359 4828 generic.go:334] "Generic (PLEG): container finished" podID="96b5bce7-4999-4bed-84dd-ca11c052c0c0" containerID="2e5d6ccce3a19756fa95554c5397d77dc42b477a6c0579843626468774d29bbb" exitCode=0
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.595483 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2fbl" event={"ID":"96b5bce7-4999-4bed-84dd-ca11c052c0c0","Type":"ContainerDied","Data":"2e5d6ccce3a19756fa95554c5397d77dc42b477a6c0579843626468774d29bbb"}
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.595656 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2fbl" event={"ID":"96b5bce7-4999-4bed-84dd-ca11c052c0c0","Type":"ContainerDied","Data":"6736d98312c6572d34a92fedff7aac65dc5671b69675b83e28cf1897283ea14a"}
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.595681 4828 scope.go:117] "RemoveContainer" containerID="2e5d6ccce3a19756fa95554c5397d77dc42b477a6c0579843626468774d29bbb"
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.596357 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2fbl"
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.618321 4828 scope.go:117] "RemoveContainer" containerID="3347d8c5e5256646fcacecb64e6ba8d58b70ad2d5c8db2c649b1e3f56763ca44"
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.642340 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2fbl"]
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.655775 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2fbl"]
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.662892 4828 scope.go:117] "RemoveContainer" containerID="6509fb13f7142367fb43ae41f47732be5f9def7ca9ff5216165ce79572de5f7b"
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.696064 4828 scope.go:117] "RemoveContainer" containerID="2e5d6ccce3a19756fa95554c5397d77dc42b477a6c0579843626468774d29bbb"
Nov 29 07:50:04 crc kubenswrapper[4828]: E1129 07:50:04.696645 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e5d6ccce3a19756fa95554c5397d77dc42b477a6c0579843626468774d29bbb\": container with ID starting with 2e5d6ccce3a19756fa95554c5397d77dc42b477a6c0579843626468774d29bbb not found: ID does not exist" containerID="2e5d6ccce3a19756fa95554c5397d77dc42b477a6c0579843626468774d29bbb"
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.696718 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e5d6ccce3a19756fa95554c5397d77dc42b477a6c0579843626468774d29bbb"} err="failed to get container status \"2e5d6ccce3a19756fa95554c5397d77dc42b477a6c0579843626468774d29bbb\": rpc error: code = NotFound desc = could not find container \"2e5d6ccce3a19756fa95554c5397d77dc42b477a6c0579843626468774d29bbb\": container with ID starting with 2e5d6ccce3a19756fa95554c5397d77dc42b477a6c0579843626468774d29bbb not found: ID does not exist"
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.696748 4828 scope.go:117] "RemoveContainer" containerID="3347d8c5e5256646fcacecb64e6ba8d58b70ad2d5c8db2c649b1e3f56763ca44"
Nov 29 07:50:04 crc kubenswrapper[4828]: E1129 07:50:04.697158 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3347d8c5e5256646fcacecb64e6ba8d58b70ad2d5c8db2c649b1e3f56763ca44\": container with ID starting with 3347d8c5e5256646fcacecb64e6ba8d58b70ad2d5c8db2c649b1e3f56763ca44 not found: ID does not exist" containerID="3347d8c5e5256646fcacecb64e6ba8d58b70ad2d5c8db2c649b1e3f56763ca44"
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.697211 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3347d8c5e5256646fcacecb64e6ba8d58b70ad2d5c8db2c649b1e3f56763ca44"} err="failed to get container status \"3347d8c5e5256646fcacecb64e6ba8d58b70ad2d5c8db2c649b1e3f56763ca44\": rpc error: code = NotFound desc = could not find container \"3347d8c5e5256646fcacecb64e6ba8d58b70ad2d5c8db2c649b1e3f56763ca44\": container with ID starting with 3347d8c5e5256646fcacecb64e6ba8d58b70ad2d5c8db2c649b1e3f56763ca44 not found: ID does not exist"
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.697240 4828 scope.go:117] "RemoveContainer" containerID="6509fb13f7142367fb43ae41f47732be5f9def7ca9ff5216165ce79572de5f7b"
Nov 29 07:50:04 crc kubenswrapper[4828]: E1129 07:50:04.697709 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6509fb13f7142367fb43ae41f47732be5f9def7ca9ff5216165ce79572de5f7b\": container with ID starting with 6509fb13f7142367fb43ae41f47732be5f9def7ca9ff5216165ce79572de5f7b not found: ID does not exist" containerID="6509fb13f7142367fb43ae41f47732be5f9def7ca9ff5216165ce79572de5f7b"
Nov 29 07:50:04 crc kubenswrapper[4828]: I1129 07:50:04.697741 4828 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6509fb13f7142367fb43ae41f47732be5f9def7ca9ff5216165ce79572de5f7b"} err="failed to get container status \"6509fb13f7142367fb43ae41f47732be5f9def7ca9ff5216165ce79572de5f7b\": rpc error: code = NotFound desc = could not find container \"6509fb13f7142367fb43ae41f47732be5f9def7ca9ff5216165ce79572de5f7b\": container with ID starting with 6509fb13f7142367fb43ae41f47732be5f9def7ca9ff5216165ce79572de5f7b not found: ID does not exist" Nov 29 07:50:05 crc kubenswrapper[4828]: I1129 07:50:05.424460 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b5bce7-4999-4bed-84dd-ca11c052c0c0" path="/var/lib/kubelet/pods/96b5bce7-4999-4bed-84dd-ca11c052c0c0/volumes" Nov 29 07:50:06 crc kubenswrapper[4828]: I1129 07:50:06.418546 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wb95z" Nov 29 07:50:06 crc kubenswrapper[4828]: I1129 07:50:06.473824 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wb95z"] Nov 29 07:50:06 crc kubenswrapper[4828]: I1129 07:50:06.614032 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wb95z" podUID="6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" containerName="registry-server" containerID="cri-o://802c17f1a729b8a71d61d99386cd077ca310647d8c76187a028af109ea192047" gracePeriod=2 Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.103298 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wb95z" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.258471 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn6sd\" (UniqueName: \"kubernetes.io/projected/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-kube-api-access-gn6sd\") pod \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\" (UID: \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\") " Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.258589 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-utilities\") pod \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\" (UID: \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\") " Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.258963 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-catalog-content\") pod \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\" (UID: \"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1\") " Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.259849 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-utilities" (OuterVolumeSpecName: "utilities") pod "6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" (UID: "6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.301524 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-kube-api-access-gn6sd" (OuterVolumeSpecName: "kube-api-access-gn6sd") pod "6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" (UID: "6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1"). InnerVolumeSpecName "kube-api-access-gn6sd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.340966 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" (UID: "6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.361825 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.361866 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn6sd\" (UniqueName: \"kubernetes.io/projected/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-kube-api-access-gn6sd\") on node \"crc\" DevicePath \"\"" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.361877 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.625871 4828 generic.go:334] "Generic (PLEG): container finished" podID="6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" containerID="802c17f1a729b8a71d61d99386cd077ca310647d8c76187a028af109ea192047" exitCode=0 Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.625920 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb95z" event={"ID":"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1","Type":"ContainerDied","Data":"802c17f1a729b8a71d61d99386cd077ca310647d8c76187a028af109ea192047"} Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.625981 4828 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-wb95z" event={"ID":"6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1","Type":"ContainerDied","Data":"d12453125b9e1f2c732451e601f75eac27f1c78bfb271786d2afe51a505352f4"} Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.626001 4828 scope.go:117] "RemoveContainer" containerID="802c17f1a729b8a71d61d99386cd077ca310647d8c76187a028af109ea192047" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.626004 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb95z" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.655702 4828 scope.go:117] "RemoveContainer" containerID="59bd73855fe6d74ad4f0efff74ac4979d86e1db67b898133e5891ca44d4a8273" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.664581 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wb95z"] Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.675426 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wb95z"] Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.685612 4828 scope.go:117] "RemoveContainer" containerID="6cf287b3c542c3ec30e1c183dae76319611d2a705e02abb07c88750fecef0c23" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.725818 4828 scope.go:117] "RemoveContainer" containerID="802c17f1a729b8a71d61d99386cd077ca310647d8c76187a028af109ea192047" Nov 29 07:50:07 crc kubenswrapper[4828]: E1129 07:50:07.726574 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"802c17f1a729b8a71d61d99386cd077ca310647d8c76187a028af109ea192047\": container with ID starting with 802c17f1a729b8a71d61d99386cd077ca310647d8c76187a028af109ea192047 not found: ID does not exist" containerID="802c17f1a729b8a71d61d99386cd077ca310647d8c76187a028af109ea192047" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 
07:50:07.726638 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802c17f1a729b8a71d61d99386cd077ca310647d8c76187a028af109ea192047"} err="failed to get container status \"802c17f1a729b8a71d61d99386cd077ca310647d8c76187a028af109ea192047\": rpc error: code = NotFound desc = could not find container \"802c17f1a729b8a71d61d99386cd077ca310647d8c76187a028af109ea192047\": container with ID starting with 802c17f1a729b8a71d61d99386cd077ca310647d8c76187a028af109ea192047 not found: ID does not exist" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.726672 4828 scope.go:117] "RemoveContainer" containerID="59bd73855fe6d74ad4f0efff74ac4979d86e1db67b898133e5891ca44d4a8273" Nov 29 07:50:07 crc kubenswrapper[4828]: E1129 07:50:07.727505 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59bd73855fe6d74ad4f0efff74ac4979d86e1db67b898133e5891ca44d4a8273\": container with ID starting with 59bd73855fe6d74ad4f0efff74ac4979d86e1db67b898133e5891ca44d4a8273 not found: ID does not exist" containerID="59bd73855fe6d74ad4f0efff74ac4979d86e1db67b898133e5891ca44d4a8273" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.727573 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59bd73855fe6d74ad4f0efff74ac4979d86e1db67b898133e5891ca44d4a8273"} err="failed to get container status \"59bd73855fe6d74ad4f0efff74ac4979d86e1db67b898133e5891ca44d4a8273\": rpc error: code = NotFound desc = could not find container \"59bd73855fe6d74ad4f0efff74ac4979d86e1db67b898133e5891ca44d4a8273\": container with ID starting with 59bd73855fe6d74ad4f0efff74ac4979d86e1db67b898133e5891ca44d4a8273 not found: ID does not exist" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.727609 4828 scope.go:117] "RemoveContainer" containerID="6cf287b3c542c3ec30e1c183dae76319611d2a705e02abb07c88750fecef0c23" Nov 29 07:50:07 crc 
kubenswrapper[4828]: E1129 07:50:07.728765 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf287b3c542c3ec30e1c183dae76319611d2a705e02abb07c88750fecef0c23\": container with ID starting with 6cf287b3c542c3ec30e1c183dae76319611d2a705e02abb07c88750fecef0c23 not found: ID does not exist" containerID="6cf287b3c542c3ec30e1c183dae76319611d2a705e02abb07c88750fecef0c23" Nov 29 07:50:07 crc kubenswrapper[4828]: I1129 07:50:07.728864 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf287b3c542c3ec30e1c183dae76319611d2a705e02abb07c88750fecef0c23"} err="failed to get container status \"6cf287b3c542c3ec30e1c183dae76319611d2a705e02abb07c88750fecef0c23\": rpc error: code = NotFound desc = could not find container \"6cf287b3c542c3ec30e1c183dae76319611d2a705e02abb07c88750fecef0c23\": container with ID starting with 6cf287b3c542c3ec30e1c183dae76319611d2a705e02abb07c88750fecef0c23 not found: ID does not exist" Nov 29 07:50:09 crc kubenswrapper[4828]: I1129 07:50:09.426081 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" path="/var/lib/kubelet/pods/6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1/volumes" Nov 29 07:50:11 crc kubenswrapper[4828]: I1129 07:50:11.487769 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:50:11 crc kubenswrapper[4828]: I1129 07:50:11.488149 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 29 07:50:41 crc kubenswrapper[4828]: I1129 07:50:41.487403 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:50:41 crc kubenswrapper[4828]: I1129 07:50:41.488014 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:51:11 crc kubenswrapper[4828]: I1129 07:51:11.487316 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:51:11 crc kubenswrapper[4828]: I1129 07:51:11.487882 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:51:11 crc kubenswrapper[4828]: I1129 07:51:11.487953 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:51:11 crc kubenswrapper[4828]: I1129 07:51:11.488898 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:51:11 crc kubenswrapper[4828]: I1129 07:51:11.488967 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" gracePeriod=600 Nov 29 07:51:11 crc kubenswrapper[4828]: E1129 07:51:11.786214 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:51:12 crc kubenswrapper[4828]: I1129 07:51:12.246194 4828 generic.go:334] "Generic (PLEG): container finished" podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" exitCode=0 Nov 29 07:51:12 crc kubenswrapper[4828]: I1129 07:51:12.246241 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456"} Nov 29 07:51:12 crc kubenswrapper[4828]: I1129 07:51:12.246288 4828 scope.go:117] "RemoveContainer" containerID="3cb6f348dbeb37c2a6d7f1ae1d1bcd52a80eb94fa3cab19b67e268a4200539bc" Nov 29 07:51:12 crc kubenswrapper[4828]: I1129 07:51:12.247142 4828 
scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:51:12 crc kubenswrapper[4828]: E1129 07:51:12.247614 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:51:24 crc kubenswrapper[4828]: I1129 07:51:24.411820 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:51:24 crc kubenswrapper[4828]: E1129 07:51:24.412728 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:51:36 crc kubenswrapper[4828]: I1129 07:51:36.412426 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:51:36 crc kubenswrapper[4828]: E1129 07:51:36.413242 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:51:38 crc kubenswrapper[4828]: I1129 
07:51:38.468511 4828 generic.go:334] "Generic (PLEG): container finished" podID="839a08fc-14bb-4b73-8028-6dec803de923" containerID="2ca9ee34f0ee47af6d7c27e34d75de29e37f7840c9e0032b80fdea68ca8b0f1b" exitCode=0 Nov 29 07:51:38 crc kubenswrapper[4828]: I1129 07:51:38.468666 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" event={"ID":"839a08fc-14bb-4b73-8028-6dec803de923","Type":"ContainerDied","Data":"2ca9ee34f0ee47af6d7c27e34d75de29e37f7840c9e0032b80fdea68ca8b0f1b"} Nov 29 07:51:39 crc kubenswrapper[4828]: I1129 07:51:39.902561 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.029575 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-migration-ssh-key-0\") pod \"839a08fc-14bb-4b73-8028-6dec803de923\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.029626 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-migration-ssh-key-1\") pod \"839a08fc-14bb-4b73-8028-6dec803de923\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.029718 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-ssh-key\") pod \"839a08fc-14bb-4b73-8028-6dec803de923\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.029821 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" 
(UniqueName: \"kubernetes.io/configmap/839a08fc-14bb-4b73-8028-6dec803de923-nova-extra-config-0\") pod \"839a08fc-14bb-4b73-8028-6dec803de923\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.029885 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-cell1-compute-config-0\") pod \"839a08fc-14bb-4b73-8028-6dec803de923\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.029923 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-combined-ca-bundle\") pod \"839a08fc-14bb-4b73-8028-6dec803de923\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.030446 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-inventory\") pod \"839a08fc-14bb-4b73-8028-6dec803de923\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.030569 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfzzc\" (UniqueName: \"kubernetes.io/projected/839a08fc-14bb-4b73-8028-6dec803de923-kube-api-access-xfzzc\") pod \"839a08fc-14bb-4b73-8028-6dec803de923\" (UID: \"839a08fc-14bb-4b73-8028-6dec803de923\") " Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.030615 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-cell1-compute-config-1\") pod \"839a08fc-14bb-4b73-8028-6dec803de923\" (UID: 
\"839a08fc-14bb-4b73-8028-6dec803de923\") " Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.036197 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "839a08fc-14bb-4b73-8028-6dec803de923" (UID: "839a08fc-14bb-4b73-8028-6dec803de923"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.036289 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/839a08fc-14bb-4b73-8028-6dec803de923-kube-api-access-xfzzc" (OuterVolumeSpecName: "kube-api-access-xfzzc") pod "839a08fc-14bb-4b73-8028-6dec803de923" (UID: "839a08fc-14bb-4b73-8028-6dec803de923"). InnerVolumeSpecName "kube-api-access-xfzzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.062574 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/839a08fc-14bb-4b73-8028-6dec803de923-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "839a08fc-14bb-4b73-8028-6dec803de923" (UID: "839a08fc-14bb-4b73-8028-6dec803de923"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.064208 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "839a08fc-14bb-4b73-8028-6dec803de923" (UID: "839a08fc-14bb-4b73-8028-6dec803de923"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.065962 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "839a08fc-14bb-4b73-8028-6dec803de923" (UID: "839a08fc-14bb-4b73-8028-6dec803de923"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.066473 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "839a08fc-14bb-4b73-8028-6dec803de923" (UID: "839a08fc-14bb-4b73-8028-6dec803de923"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.068358 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "839a08fc-14bb-4b73-8028-6dec803de923" (UID: "839a08fc-14bb-4b73-8028-6dec803de923"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.068960 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-inventory" (OuterVolumeSpecName: "inventory") pod "839a08fc-14bb-4b73-8028-6dec803de923" (UID: "839a08fc-14bb-4b73-8028-6dec803de923"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.071914 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "839a08fc-14bb-4b73-8028-6dec803de923" (UID: "839a08fc-14bb-4b73-8028-6dec803de923"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.133629 4828 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.133674 4828 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.133689 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.133702 4828 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/839a08fc-14bb-4b73-8028-6dec803de923-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.133714 4828 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.133728 4828 
reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.133741 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.133752 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfzzc\" (UniqueName: \"kubernetes.io/projected/839a08fc-14bb-4b73-8028-6dec803de923-kube-api-access-xfzzc\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.133765 4828 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/839a08fc-14bb-4b73-8028-6dec803de923-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.486959 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" event={"ID":"839a08fc-14bb-4b73-8028-6dec803de923","Type":"ContainerDied","Data":"ecf87bf8ff44b11db311ad6675cf869b30932f2c73f10eacdb8bc88df3c2211d"} Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.487021 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecf87bf8ff44b11db311ad6675cf869b30932f2c73f10eacdb8bc88df3c2211d" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.487075 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xrjp7" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.585220 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9"] Nov 29 07:51:40 crc kubenswrapper[4828]: E1129 07:51:40.585730 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" containerName="extract-utilities" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.585753 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" containerName="extract-utilities" Nov 29 07:51:40 crc kubenswrapper[4828]: E1129 07:51:40.585772 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" containerName="extract-content" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.585779 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" containerName="extract-content" Nov 29 07:51:40 crc kubenswrapper[4828]: E1129 07:51:40.585799 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" containerName="registry-server" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.585805 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" containerName="registry-server" Nov 29 07:51:40 crc kubenswrapper[4828]: E1129 07:51:40.585815 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b5bce7-4999-4bed-84dd-ca11c052c0c0" containerName="extract-utilities" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.585822 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b5bce7-4999-4bed-84dd-ca11c052c0c0" containerName="extract-utilities" Nov 29 07:51:40 crc kubenswrapper[4828]: E1129 07:51:40.585839 4828 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="839a08fc-14bb-4b73-8028-6dec803de923" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.585844 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="839a08fc-14bb-4b73-8028-6dec803de923" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 29 07:51:40 crc kubenswrapper[4828]: E1129 07:51:40.585864 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b5bce7-4999-4bed-84dd-ca11c052c0c0" containerName="registry-server" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.585870 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b5bce7-4999-4bed-84dd-ca11c052c0c0" containerName="registry-server" Nov 29 07:51:40 crc kubenswrapper[4828]: E1129 07:51:40.585882 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b5bce7-4999-4bed-84dd-ca11c052c0c0" containerName="extract-content" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.585889 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b5bce7-4999-4bed-84dd-ca11c052c0c0" containerName="extract-content" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.586118 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="839a08fc-14bb-4b73-8028-6dec803de923" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.586148 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b5bce7-4999-4bed-84dd-ca11c052c0c0" containerName="registry-server" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.586161 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c5e6c27-3b49-4135-8ae7-7cd5998ef1f1" containerName="registry-server" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.586942 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.590973 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.591057 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.591318 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.591452 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.591510 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bk6td" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.612530 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9"] Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.745217 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.745405 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.745435 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.745477 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.745515 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-925lg\" (UniqueName: \"kubernetes.io/projected/38983969-7980-489d-973e-2d4bc3de2420-kube-api-access-925lg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.745608 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.745661 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.848029 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.848406 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.848654 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.848793 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-925lg\" (UniqueName: \"kubernetes.io/projected/38983969-7980-489d-973e-2d4bc3de2420-kube-api-access-925lg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.848915 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.849038 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.849216 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.852714 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.852723 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.852987 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.853712 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.853763 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.863452 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.875057 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-925lg\" (UniqueName: \"kubernetes.io/projected/38983969-7980-489d-973e-2d4bc3de2420-kube-api-access-925lg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:40 crc kubenswrapper[4828]: I1129 07:51:40.906227 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:51:41 crc kubenswrapper[4828]: I1129 07:51:41.447311 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9"] Nov 29 07:51:41 crc kubenswrapper[4828]: I1129 07:51:41.500229 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" event={"ID":"38983969-7980-489d-973e-2d4bc3de2420","Type":"ContainerStarted","Data":"dbad0b8f78f8856c3ecd2090d23241f6987a3e063a8c9c0f0339e3b9256a6201"} Nov 29 07:51:42 crc kubenswrapper[4828]: I1129 07:51:42.511603 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" event={"ID":"38983969-7980-489d-973e-2d4bc3de2420","Type":"ContainerStarted","Data":"ffc4325ea76005db278afec1647054c3e28ac95d8817acf594891da62ef041e8"} Nov 29 07:51:42 crc kubenswrapper[4828]: I1129 07:51:42.540123 4828 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" podStartSLOduration=2.030379687 podStartE2EDuration="2.540091978s" podCreationTimestamp="2025-11-29 07:51:40 +0000 UTC" firstStartedPulling="2025-11-29 07:51:41.449853531 +0000 UTC m=+3041.071929589" lastFinishedPulling="2025-11-29 07:51:41.959565822 +0000 UTC m=+3041.581641880" observedRunningTime="2025-11-29 07:51:42.53231736 +0000 UTC m=+3042.154393418" watchObservedRunningTime="2025-11-29 07:51:42.540091978 +0000 UTC m=+3042.162168046" Nov 29 07:51:47 crc kubenswrapper[4828]: I1129 07:51:47.412924 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:51:47 crc kubenswrapper[4828]: E1129 07:51:47.413643 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:51:58 crc kubenswrapper[4828]: I1129 07:51:58.412153 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:51:58 crc kubenswrapper[4828]: E1129 07:51:58.413699 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:52:10 crc kubenswrapper[4828]: I1129 07:52:10.412696 4828 
scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:52:10 crc kubenswrapper[4828]: E1129 07:52:10.413664 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:52:24 crc kubenswrapper[4828]: I1129 07:52:24.412249 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:52:24 crc kubenswrapper[4828]: E1129 07:52:24.413101 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:52:35 crc kubenswrapper[4828]: I1129 07:52:35.413139 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:52:35 crc kubenswrapper[4828]: E1129 07:52:35.414027 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:52:48 crc kubenswrapper[4828]: I1129 
07:52:48.412013 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:52:48 crc kubenswrapper[4828]: E1129 07:52:48.412686 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:53:00 crc kubenswrapper[4828]: I1129 07:53:00.411478 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:53:00 crc kubenswrapper[4828]: E1129 07:53:00.412501 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:53:12 crc kubenswrapper[4828]: I1129 07:53:12.423582 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:53:12 crc kubenswrapper[4828]: E1129 07:53:12.424914 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:53:23 crc 
kubenswrapper[4828]: I1129 07:53:23.412017 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:53:23 crc kubenswrapper[4828]: E1129 07:53:23.412979 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:53:38 crc kubenswrapper[4828]: I1129 07:53:38.412817 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:53:38 crc kubenswrapper[4828]: E1129 07:53:38.413578 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:53:52 crc kubenswrapper[4828]: I1129 07:53:52.411993 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:53:52 crc kubenswrapper[4828]: E1129 07:53:52.412761 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 
29 07:54:04 crc kubenswrapper[4828]: I1129 07:54:04.412696 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:54:04 crc kubenswrapper[4828]: E1129 07:54:04.413485 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:54:08 crc kubenswrapper[4828]: I1129 07:54:08.939984 4828 generic.go:334] "Generic (PLEG): container finished" podID="38983969-7980-489d-973e-2d4bc3de2420" containerID="ffc4325ea76005db278afec1647054c3e28ac95d8817acf594891da62ef041e8" exitCode=0 Nov 29 07:54:08 crc kubenswrapper[4828]: I1129 07:54:08.940192 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" event={"ID":"38983969-7980-489d-973e-2d4bc3de2420","Type":"ContainerDied","Data":"ffc4325ea76005db278afec1647054c3e28ac95d8817acf594891da62ef041e8"} Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.339889 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.418768 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ssh-key\") pod \"38983969-7980-489d-973e-2d4bc3de2420\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.418843 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-inventory\") pod \"38983969-7980-489d-973e-2d4bc3de2420\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.418870 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-2\") pod \"38983969-7980-489d-973e-2d4bc3de2420\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.418899 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-0\") pod \"38983969-7980-489d-973e-2d4bc3de2420\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.418930 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-925lg\" (UniqueName: \"kubernetes.io/projected/38983969-7980-489d-973e-2d4bc3de2420-kube-api-access-925lg\") pod \"38983969-7980-489d-973e-2d4bc3de2420\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.419009 4828 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-1\") pod \"38983969-7980-489d-973e-2d4bc3de2420\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.419128 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-telemetry-combined-ca-bundle\") pod \"38983969-7980-489d-973e-2d4bc3de2420\" (UID: \"38983969-7980-489d-973e-2d4bc3de2420\") " Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.425324 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "38983969-7980-489d-973e-2d4bc3de2420" (UID: "38983969-7980-489d-973e-2d4bc3de2420"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.427573 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38983969-7980-489d-973e-2d4bc3de2420-kube-api-access-925lg" (OuterVolumeSpecName: "kube-api-access-925lg") pod "38983969-7980-489d-973e-2d4bc3de2420" (UID: "38983969-7980-489d-973e-2d4bc3de2420"). InnerVolumeSpecName "kube-api-access-925lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.453413 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-inventory" (OuterVolumeSpecName: "inventory") pod "38983969-7980-489d-973e-2d4bc3de2420" (UID: "38983969-7980-489d-973e-2d4bc3de2420"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.453801 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "38983969-7980-489d-973e-2d4bc3de2420" (UID: "38983969-7980-489d-973e-2d4bc3de2420"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.455235 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "38983969-7980-489d-973e-2d4bc3de2420" (UID: "38983969-7980-489d-973e-2d4bc3de2420"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.457520 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "38983969-7980-489d-973e-2d4bc3de2420" (UID: "38983969-7980-489d-973e-2d4bc3de2420"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.468690 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "38983969-7980-489d-973e-2d4bc3de2420" (UID: "38983969-7980-489d-973e-2d4bc3de2420"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.521965 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.522318 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.522332 4828 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.522347 4828 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.522360 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-925lg\" (UniqueName: \"kubernetes.io/projected/38983969-7980-489d-973e-2d4bc3de2420-kube-api-access-925lg\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.522372 4828 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.522382 4828 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/38983969-7980-489d-973e-2d4bc3de2420-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.957766 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" event={"ID":"38983969-7980-489d-973e-2d4bc3de2420","Type":"ContainerDied","Data":"dbad0b8f78f8856c3ecd2090d23241f6987a3e063a8c9c0f0339e3b9256a6201"} Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.957824 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbad0b8f78f8856c3ecd2090d23241f6987a3e063a8c9c0f0339e3b9256a6201" Nov 29 07:54:10 crc kubenswrapper[4828]: I1129 07:54:10.957833 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9" Nov 29 07:54:19 crc kubenswrapper[4828]: I1129 07:54:19.415043 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:54:19 crc kubenswrapper[4828]: E1129 07:54:19.416781 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:54:32 crc kubenswrapper[4828]: I1129 07:54:32.411409 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:54:32 crc kubenswrapper[4828]: E1129 07:54:32.412305 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:54:47 crc kubenswrapper[4828]: I1129 07:54:47.411752 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:54:47 crc kubenswrapper[4828]: E1129 07:54:47.414294 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:54:58 crc kubenswrapper[4828]: I1129 07:54:58.412226 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:54:58 crc kubenswrapper[4828]: E1129 07:54:58.413055 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.914021 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 07:55:01 crc kubenswrapper[4828]: E1129 07:55:01.915116 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38983969-7980-489d-973e-2d4bc3de2420" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 29 
07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.915141 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="38983969-7980-489d-973e-2d4bc3de2420" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.915452 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="38983969-7980-489d-973e-2d4bc3de2420" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.916196 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.918346 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.918573 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.918615 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-7x6kc" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.918827 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.922748 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.995356 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.995458 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.995516 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.995533 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-542l9\" (UniqueName: \"kubernetes.io/projected/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-kube-api-access-542l9\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.995604 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.995649 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.995680 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-config-data\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.995717 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:01 crc kubenswrapper[4828]: I1129 07:55:01.995739 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.097779 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.098100 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.099001 4828 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.099046 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-542l9\" (UniqueName: \"kubernetes.io/projected/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-kube-api-access-542l9\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.099202 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.099300 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.099364 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-config-data\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.099475 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-ca-certs\") 
pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.099508 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.099662 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.099699 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.099955 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.100102 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-openstack-config\") pod 
\"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.100738 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-config-data\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.105500 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.105661 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.106515 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.121164 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-542l9\" (UniqueName: \"kubernetes.io/projected/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-kube-api-access-542l9\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 
07:55:02.130855 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.246092 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.770085 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 07:55:02 crc kubenswrapper[4828]: I1129 07:55:02.770587 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:55:03 crc kubenswrapper[4828]: I1129 07:55:03.523241 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da","Type":"ContainerStarted","Data":"3045907357aabad916740a40f8c8e09d1a0d3f185d4ac3f42d73b0d75a7620ae"} Nov 29 07:55:09 crc kubenswrapper[4828]: I1129 07:55:09.411524 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:55:09 crc kubenswrapper[4828]: E1129 07:55:09.412030 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:55:23 crc kubenswrapper[4828]: I1129 07:55:23.421087 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:55:23 crc kubenswrapper[4828]: E1129 
07:55:23.421987 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:55:34 crc kubenswrapper[4828]: E1129 07:55:34.799442 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 29 07:55:34 crc kubenswrapper[4828]: E1129 07:55:34.801097 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,Mount
Path:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-542l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da): ErrImagePull: rpc error: code = Canceled desc = copying config: context 
canceled" logger="UnhandledError" Nov 29 07:55:34 crc kubenswrapper[4828]: E1129 07:55:34.802369 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" Nov 29 07:55:34 crc kubenswrapper[4828]: E1129 07:55:34.886219 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" Nov 29 07:55:36 crc kubenswrapper[4828]: I1129 07:55:36.411709 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:55:36 crc kubenswrapper[4828]: E1129 07:55:36.412204 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:55:50 crc kubenswrapper[4828]: I1129 07:55:50.411779 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:55:50 crc kubenswrapper[4828]: E1129 07:55:50.412833 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:55:54 crc kubenswrapper[4828]: I1129 07:55:54.063628 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da","Type":"ContainerStarted","Data":"cce125a4c8e28fcfcf32672d4bb6eeb76c07918f56ef2960f9931bee22a717dd"} Nov 29 07:55:54 crc kubenswrapper[4828]: I1129 07:55:54.095301 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.427127514 podStartE2EDuration="54.09522131s" podCreationTimestamp="2025-11-29 07:55:00 +0000 UTC" firstStartedPulling="2025-11-29 07:55:02.770362305 +0000 UTC m=+3242.392438363" lastFinishedPulling="2025-11-29 07:55:52.438456101 +0000 UTC m=+3292.060532159" observedRunningTime="2025-11-29 07:55:54.083605615 +0000 UTC m=+3293.705681673" watchObservedRunningTime="2025-11-29 07:55:54.09522131 +0000 UTC m=+3293.717297368" Nov 29 07:56:03 crc kubenswrapper[4828]: I1129 07:56:03.413708 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:56:03 crc kubenswrapper[4828]: E1129 07:56:03.414696 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 07:56:18 crc kubenswrapper[4828]: I1129 07:56:18.411578 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" 
Nov 29 07:56:19 crc kubenswrapper[4828]: I1129 07:56:19.308033 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"bca2ef8c8a7cefee98698adc1a998e44a3bf38ad04b26423bdd6d1a827da8d28"} Nov 29 07:58:18 crc kubenswrapper[4828]: I1129 07:58:18.596140 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4ktw9"] Nov 29 07:58:18 crc kubenswrapper[4828]: I1129 07:58:18.599312 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:18 crc kubenswrapper[4828]: I1129 07:58:18.634311 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ktw9"] Nov 29 07:58:18 crc kubenswrapper[4828]: I1129 07:58:18.713946 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfsxz\" (UniqueName: \"kubernetes.io/projected/089dc74a-5533-42b0-92e6-4dd5164749b4-kube-api-access-gfsxz\") pod \"redhat-operators-4ktw9\" (UID: \"089dc74a-5533-42b0-92e6-4dd5164749b4\") " pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:18 crc kubenswrapper[4828]: I1129 07:58:18.714018 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089dc74a-5533-42b0-92e6-4dd5164749b4-catalog-content\") pod \"redhat-operators-4ktw9\" (UID: \"089dc74a-5533-42b0-92e6-4dd5164749b4\") " pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:18 crc kubenswrapper[4828]: I1129 07:58:18.714047 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089dc74a-5533-42b0-92e6-4dd5164749b4-utilities\") pod \"redhat-operators-4ktw9\" (UID: 
\"089dc74a-5533-42b0-92e6-4dd5164749b4\") " pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:18 crc kubenswrapper[4828]: I1129 07:58:18.815551 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089dc74a-5533-42b0-92e6-4dd5164749b4-catalog-content\") pod \"redhat-operators-4ktw9\" (UID: \"089dc74a-5533-42b0-92e6-4dd5164749b4\") " pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:18 crc kubenswrapper[4828]: I1129 07:58:18.815645 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089dc74a-5533-42b0-92e6-4dd5164749b4-utilities\") pod \"redhat-operators-4ktw9\" (UID: \"089dc74a-5533-42b0-92e6-4dd5164749b4\") " pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:18 crc kubenswrapper[4828]: I1129 07:58:18.815853 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfsxz\" (UniqueName: \"kubernetes.io/projected/089dc74a-5533-42b0-92e6-4dd5164749b4-kube-api-access-gfsxz\") pod \"redhat-operators-4ktw9\" (UID: \"089dc74a-5533-42b0-92e6-4dd5164749b4\") " pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:18 crc kubenswrapper[4828]: I1129 07:58:18.816312 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089dc74a-5533-42b0-92e6-4dd5164749b4-catalog-content\") pod \"redhat-operators-4ktw9\" (UID: \"089dc74a-5533-42b0-92e6-4dd5164749b4\") " pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:18 crc kubenswrapper[4828]: I1129 07:58:18.816539 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089dc74a-5533-42b0-92e6-4dd5164749b4-utilities\") pod \"redhat-operators-4ktw9\" (UID: \"089dc74a-5533-42b0-92e6-4dd5164749b4\") " 
pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:18 crc kubenswrapper[4828]: I1129 07:58:18.857387 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfsxz\" (UniqueName: \"kubernetes.io/projected/089dc74a-5533-42b0-92e6-4dd5164749b4-kube-api-access-gfsxz\") pod \"redhat-operators-4ktw9\" (UID: \"089dc74a-5533-42b0-92e6-4dd5164749b4\") " pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:18 crc kubenswrapper[4828]: I1129 07:58:18.920849 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:19 crc kubenswrapper[4828]: I1129 07:58:19.633352 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ktw9"] Nov 29 07:58:20 crc kubenswrapper[4828]: I1129 07:58:20.499556 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ktw9" event={"ID":"089dc74a-5533-42b0-92e6-4dd5164749b4","Type":"ContainerStarted","Data":"84125006e05dfd89ebe1846398449574af22bf1b47cb16026b0410e8cc62dc9c"} Nov 29 07:58:20 crc kubenswrapper[4828]: I1129 07:58:20.499631 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ktw9" event={"ID":"089dc74a-5533-42b0-92e6-4dd5164749b4","Type":"ContainerStarted","Data":"e3ea83b59505004575a9ae9163d76366242913572077e6ee345c0719ec67c00f"} Nov 29 07:58:21 crc kubenswrapper[4828]: I1129 07:58:21.512098 4828 generic.go:334] "Generic (PLEG): container finished" podID="089dc74a-5533-42b0-92e6-4dd5164749b4" containerID="84125006e05dfd89ebe1846398449574af22bf1b47cb16026b0410e8cc62dc9c" exitCode=0 Nov 29 07:58:21 crc kubenswrapper[4828]: I1129 07:58:21.512223 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ktw9" 
event={"ID":"089dc74a-5533-42b0-92e6-4dd5164749b4","Type":"ContainerDied","Data":"84125006e05dfd89ebe1846398449574af22bf1b47cb16026b0410e8cc62dc9c"} Nov 29 07:58:23 crc kubenswrapper[4828]: I1129 07:58:23.539036 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ktw9" event={"ID":"089dc74a-5533-42b0-92e6-4dd5164749b4","Type":"ContainerStarted","Data":"6c85399afd02986c097df37dcb7a4e68810b4bd5e32d404850a903b2b6a52455"} Nov 29 07:58:26 crc kubenswrapper[4828]: I1129 07:58:26.566883 4828 generic.go:334] "Generic (PLEG): container finished" podID="089dc74a-5533-42b0-92e6-4dd5164749b4" containerID="6c85399afd02986c097df37dcb7a4e68810b4bd5e32d404850a903b2b6a52455" exitCode=0 Nov 29 07:58:26 crc kubenswrapper[4828]: I1129 07:58:26.567009 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ktw9" event={"ID":"089dc74a-5533-42b0-92e6-4dd5164749b4","Type":"ContainerDied","Data":"6c85399afd02986c097df37dcb7a4e68810b4bd5e32d404850a903b2b6a52455"} Nov 29 07:58:28 crc kubenswrapper[4828]: I1129 07:58:28.600935 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ktw9" event={"ID":"089dc74a-5533-42b0-92e6-4dd5164749b4","Type":"ContainerStarted","Data":"a361d1e5900bc7061827fac792ee3eaf8816e07356bde9d1ef77164df55326e8"} Nov 29 07:58:28 crc kubenswrapper[4828]: I1129 07:58:28.639324 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4ktw9" podStartSLOduration=4.313363671 podStartE2EDuration="10.639241349s" podCreationTimestamp="2025-11-29 07:58:18 +0000 UTC" firstStartedPulling="2025-11-29 07:58:21.516420864 +0000 UTC m=+3441.138496922" lastFinishedPulling="2025-11-29 07:58:27.842298542 +0000 UTC m=+3447.464374600" observedRunningTime="2025-11-29 07:58:28.633096143 +0000 UTC m=+3448.255172211" watchObservedRunningTime="2025-11-29 07:58:28.639241349 +0000 UTC m=+3448.261317407" 
Nov 29 07:58:28 crc kubenswrapper[4828]: I1129 07:58:28.921161 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:28 crc kubenswrapper[4828]: I1129 07:58:28.921211 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:29 crc kubenswrapper[4828]: I1129 07:58:29.974979 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4ktw9" podUID="089dc74a-5533-42b0-92e6-4dd5164749b4" containerName="registry-server" probeResult="failure" output=< Nov 29 07:58:29 crc kubenswrapper[4828]: timeout: failed to connect service ":50051" within 1s Nov 29 07:58:29 crc kubenswrapper[4828]: > Nov 29 07:58:38 crc kubenswrapper[4828]: I1129 07:58:38.975984 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:39 crc kubenswrapper[4828]: I1129 07:58:39.079337 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:39 crc kubenswrapper[4828]: I1129 07:58:39.264873 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ktw9"] Nov 29 07:58:40 crc kubenswrapper[4828]: I1129 07:58:40.715110 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4ktw9" podUID="089dc74a-5533-42b0-92e6-4dd5164749b4" containerName="registry-server" containerID="cri-o://a361d1e5900bc7061827fac792ee3eaf8816e07356bde9d1ef77164df55326e8" gracePeriod=2 Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.486544 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.486906 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.487776 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.539816 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfsxz\" (UniqueName: \"kubernetes.io/projected/089dc74a-5533-42b0-92e6-4dd5164749b4-kube-api-access-gfsxz\") pod \"089dc74a-5533-42b0-92e6-4dd5164749b4\" (UID: \"089dc74a-5533-42b0-92e6-4dd5164749b4\") " Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.540103 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089dc74a-5533-42b0-92e6-4dd5164749b4-utilities\") pod \"089dc74a-5533-42b0-92e6-4dd5164749b4\" (UID: \"089dc74a-5533-42b0-92e6-4dd5164749b4\") " Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.540132 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089dc74a-5533-42b0-92e6-4dd5164749b4-catalog-content\") pod \"089dc74a-5533-42b0-92e6-4dd5164749b4\" (UID: \"089dc74a-5533-42b0-92e6-4dd5164749b4\") " Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.541158 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/089dc74a-5533-42b0-92e6-4dd5164749b4-utilities" (OuterVolumeSpecName: "utilities") pod 
"089dc74a-5533-42b0-92e6-4dd5164749b4" (UID: "089dc74a-5533-42b0-92e6-4dd5164749b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.561556 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/089dc74a-5533-42b0-92e6-4dd5164749b4-kube-api-access-gfsxz" (OuterVolumeSpecName: "kube-api-access-gfsxz") pod "089dc74a-5533-42b0-92e6-4dd5164749b4" (UID: "089dc74a-5533-42b0-92e6-4dd5164749b4"). InnerVolumeSpecName "kube-api-access-gfsxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.643292 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089dc74a-5533-42b0-92e6-4dd5164749b4-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.643345 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfsxz\" (UniqueName: \"kubernetes.io/projected/089dc74a-5533-42b0-92e6-4dd5164749b4-kube-api-access-gfsxz\") on node \"crc\" DevicePath \"\"" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.679609 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/089dc74a-5533-42b0-92e6-4dd5164749b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "089dc74a-5533-42b0-92e6-4dd5164749b4" (UID: "089dc74a-5533-42b0-92e6-4dd5164749b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.726154 4828 generic.go:334] "Generic (PLEG): container finished" podID="089dc74a-5533-42b0-92e6-4dd5164749b4" containerID="a361d1e5900bc7061827fac792ee3eaf8816e07356bde9d1ef77164df55326e8" exitCode=0 Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.726200 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ktw9" event={"ID":"089dc74a-5533-42b0-92e6-4dd5164749b4","Type":"ContainerDied","Data":"a361d1e5900bc7061827fac792ee3eaf8816e07356bde9d1ef77164df55326e8"} Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.726227 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ktw9" event={"ID":"089dc74a-5533-42b0-92e6-4dd5164749b4","Type":"ContainerDied","Data":"e3ea83b59505004575a9ae9163d76366242913572077e6ee345c0719ec67c00f"} Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.726246 4828 scope.go:117] "RemoveContainer" containerID="a361d1e5900bc7061827fac792ee3eaf8816e07356bde9d1ef77164df55326e8" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.726297 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4ktw9" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.747977 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089dc74a-5533-42b0-92e6-4dd5164749b4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.769864 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ktw9"] Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.771620 4828 scope.go:117] "RemoveContainer" containerID="6c85399afd02986c097df37dcb7a4e68810b4bd5e32d404850a903b2b6a52455" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.781065 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4ktw9"] Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.806941 4828 scope.go:117] "RemoveContainer" containerID="84125006e05dfd89ebe1846398449574af22bf1b47cb16026b0410e8cc62dc9c" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.847903 4828 scope.go:117] "RemoveContainer" containerID="a361d1e5900bc7061827fac792ee3eaf8816e07356bde9d1ef77164df55326e8" Nov 29 07:58:41 crc kubenswrapper[4828]: E1129 07:58:41.848438 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a361d1e5900bc7061827fac792ee3eaf8816e07356bde9d1ef77164df55326e8\": container with ID starting with a361d1e5900bc7061827fac792ee3eaf8816e07356bde9d1ef77164df55326e8 not found: ID does not exist" containerID="a361d1e5900bc7061827fac792ee3eaf8816e07356bde9d1ef77164df55326e8" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.848495 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a361d1e5900bc7061827fac792ee3eaf8816e07356bde9d1ef77164df55326e8"} err="failed to get container status 
\"a361d1e5900bc7061827fac792ee3eaf8816e07356bde9d1ef77164df55326e8\": rpc error: code = NotFound desc = could not find container \"a361d1e5900bc7061827fac792ee3eaf8816e07356bde9d1ef77164df55326e8\": container with ID starting with a361d1e5900bc7061827fac792ee3eaf8816e07356bde9d1ef77164df55326e8 not found: ID does not exist" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.848525 4828 scope.go:117] "RemoveContainer" containerID="6c85399afd02986c097df37dcb7a4e68810b4bd5e32d404850a903b2b6a52455" Nov 29 07:58:41 crc kubenswrapper[4828]: E1129 07:58:41.849057 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c85399afd02986c097df37dcb7a4e68810b4bd5e32d404850a903b2b6a52455\": container with ID starting with 6c85399afd02986c097df37dcb7a4e68810b4bd5e32d404850a903b2b6a52455 not found: ID does not exist" containerID="6c85399afd02986c097df37dcb7a4e68810b4bd5e32d404850a903b2b6a52455" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.849250 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c85399afd02986c097df37dcb7a4e68810b4bd5e32d404850a903b2b6a52455"} err="failed to get container status \"6c85399afd02986c097df37dcb7a4e68810b4bd5e32d404850a903b2b6a52455\": rpc error: code = NotFound desc = could not find container \"6c85399afd02986c097df37dcb7a4e68810b4bd5e32d404850a903b2b6a52455\": container with ID starting with 6c85399afd02986c097df37dcb7a4e68810b4bd5e32d404850a903b2b6a52455 not found: ID does not exist" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.849374 4828 scope.go:117] "RemoveContainer" containerID="84125006e05dfd89ebe1846398449574af22bf1b47cb16026b0410e8cc62dc9c" Nov 29 07:58:41 crc kubenswrapper[4828]: E1129 07:58:41.850363 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"84125006e05dfd89ebe1846398449574af22bf1b47cb16026b0410e8cc62dc9c\": container with ID starting with 84125006e05dfd89ebe1846398449574af22bf1b47cb16026b0410e8cc62dc9c not found: ID does not exist" containerID="84125006e05dfd89ebe1846398449574af22bf1b47cb16026b0410e8cc62dc9c" Nov 29 07:58:41 crc kubenswrapper[4828]: I1129 07:58:41.850403 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84125006e05dfd89ebe1846398449574af22bf1b47cb16026b0410e8cc62dc9c"} err="failed to get container status \"84125006e05dfd89ebe1846398449574af22bf1b47cb16026b0410e8cc62dc9c\": rpc error: code = NotFound desc = could not find container \"84125006e05dfd89ebe1846398449574af22bf1b47cb16026b0410e8cc62dc9c\": container with ID starting with 84125006e05dfd89ebe1846398449574af22bf1b47cb16026b0410e8cc62dc9c not found: ID does not exist" Nov 29 07:58:43 crc kubenswrapper[4828]: I1129 07:58:43.422338 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="089dc74a-5533-42b0-92e6-4dd5164749b4" path="/var/lib/kubelet/pods/089dc74a-5533-42b0-92e6-4dd5164749b4/volumes" Nov 29 07:59:11 crc kubenswrapper[4828]: I1129 07:59:11.487089 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:59:11 crc kubenswrapper[4828]: I1129 07:59:11.487760 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:59:41 crc kubenswrapper[4828]: I1129 07:59:41.487624 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:59:41 crc kubenswrapper[4828]: I1129 07:59:41.488195 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:59:41 crc kubenswrapper[4828]: I1129 07:59:41.488320 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 07:59:41 crc kubenswrapper[4828]: I1129 07:59:41.489390 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bca2ef8c8a7cefee98698adc1a998e44a3bf38ad04b26423bdd6d1a827da8d28"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:59:41 crc kubenswrapper[4828]: I1129 07:59:41.489470 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://bca2ef8c8a7cefee98698adc1a998e44a3bf38ad04b26423bdd6d1a827da8d28" gracePeriod=600 Nov 29 07:59:42 crc kubenswrapper[4828]: I1129 07:59:42.291664 4828 generic.go:334] "Generic (PLEG): container finished" podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="bca2ef8c8a7cefee98698adc1a998e44a3bf38ad04b26423bdd6d1a827da8d28" exitCode=0 Nov 29 07:59:42 crc kubenswrapper[4828]: I1129 07:59:42.291728 4828 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"bca2ef8c8a7cefee98698adc1a998e44a3bf38ad04b26423bdd6d1a827da8d28"} Nov 29 07:59:42 crc kubenswrapper[4828]: I1129 07:59:42.292331 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969"} Nov 29 07:59:42 crc kubenswrapper[4828]: I1129 07:59:42.292359 4828 scope.go:117] "RemoveContainer" containerID="0bfc41244b27d429633e3847748d86cd69f8c83b5ed6fb1bf03cc760f7aa9456" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.581124 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-47wvx"] Nov 29 07:59:48 crc kubenswrapper[4828]: E1129 07:59:48.582300 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="089dc74a-5533-42b0-92e6-4dd5164749b4" containerName="extract-utilities" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.582321 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="089dc74a-5533-42b0-92e6-4dd5164749b4" containerName="extract-utilities" Nov 29 07:59:48 crc kubenswrapper[4828]: E1129 07:59:48.582344 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="089dc74a-5533-42b0-92e6-4dd5164749b4" containerName="registry-server" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.582353 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="089dc74a-5533-42b0-92e6-4dd5164749b4" containerName="registry-server" Nov 29 07:59:48 crc kubenswrapper[4828]: E1129 07:59:48.582406 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="089dc74a-5533-42b0-92e6-4dd5164749b4" containerName="extract-content" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.582414 4828 
state_mem.go:107] "Deleted CPUSet assignment" podUID="089dc74a-5533-42b0-92e6-4dd5164749b4" containerName="extract-content" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.582727 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="089dc74a-5533-42b0-92e6-4dd5164749b4" containerName="registry-server" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.584747 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.596749 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-47wvx"] Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.787455 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be102436-6def-4c8c-a81e-198d2bf769a2-catalog-content\") pod \"certified-operators-47wvx\" (UID: \"be102436-6def-4c8c-a81e-198d2bf769a2\") " pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.787513 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be102436-6def-4c8c-a81e-198d2bf769a2-utilities\") pod \"certified-operators-47wvx\" (UID: \"be102436-6def-4c8c-a81e-198d2bf769a2\") " pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.787733 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf2nl\" (UniqueName: \"kubernetes.io/projected/be102436-6def-4c8c-a81e-198d2bf769a2-kube-api-access-hf2nl\") pod \"certified-operators-47wvx\" (UID: \"be102436-6def-4c8c-a81e-198d2bf769a2\") " pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 
07:59:48.889743 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf2nl\" (UniqueName: \"kubernetes.io/projected/be102436-6def-4c8c-a81e-198d2bf769a2-kube-api-access-hf2nl\") pod \"certified-operators-47wvx\" (UID: \"be102436-6def-4c8c-a81e-198d2bf769a2\") " pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.890165 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be102436-6def-4c8c-a81e-198d2bf769a2-catalog-content\") pod \"certified-operators-47wvx\" (UID: \"be102436-6def-4c8c-a81e-198d2bf769a2\") " pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.890189 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be102436-6def-4c8c-a81e-198d2bf769a2-utilities\") pod \"certified-operators-47wvx\" (UID: \"be102436-6def-4c8c-a81e-198d2bf769a2\") " pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.890867 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be102436-6def-4c8c-a81e-198d2bf769a2-utilities\") pod \"certified-operators-47wvx\" (UID: \"be102436-6def-4c8c-a81e-198d2bf769a2\") " pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.890866 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be102436-6def-4c8c-a81e-198d2bf769a2-catalog-content\") pod \"certified-operators-47wvx\" (UID: \"be102436-6def-4c8c-a81e-198d2bf769a2\") " pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.918502 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf2nl\" (UniqueName: \"kubernetes.io/projected/be102436-6def-4c8c-a81e-198d2bf769a2-kube-api-access-hf2nl\") pod \"certified-operators-47wvx\" (UID: \"be102436-6def-4c8c-a81e-198d2bf769a2\") " pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:48 crc kubenswrapper[4828]: I1129 07:59:48.921612 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:49 crc kubenswrapper[4828]: I1129 07:59:49.503257 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-47wvx"] Nov 29 07:59:50 crc kubenswrapper[4828]: I1129 07:59:50.511969 4828 generic.go:334] "Generic (PLEG): container finished" podID="be102436-6def-4c8c-a81e-198d2bf769a2" containerID="06915d9d20fc5d5e592505bb8393b7df7c08d8557726258946d5cec8382a2f91" exitCode=0 Nov 29 07:59:50 crc kubenswrapper[4828]: I1129 07:59:50.512181 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-47wvx" event={"ID":"be102436-6def-4c8c-a81e-198d2bf769a2","Type":"ContainerDied","Data":"06915d9d20fc5d5e592505bb8393b7df7c08d8557726258946d5cec8382a2f91"} Nov 29 07:59:50 crc kubenswrapper[4828]: I1129 07:59:50.512427 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-47wvx" event={"ID":"be102436-6def-4c8c-a81e-198d2bf769a2","Type":"ContainerStarted","Data":"42caefaed3ba880b3fbcb296156004569d0726895c8cd05ae52dccb7d7ef0100"} Nov 29 07:59:52 crc kubenswrapper[4828]: I1129 07:59:52.536206 4828 generic.go:334] "Generic (PLEG): container finished" podID="be102436-6def-4c8c-a81e-198d2bf769a2" containerID="70e6f12fd4ac63a8c7ad2ceedef116aedf9d61154c6b858a82f7f27d99014abd" exitCode=0 Nov 29 07:59:52 crc kubenswrapper[4828]: I1129 07:59:52.536308 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-47wvx" event={"ID":"be102436-6def-4c8c-a81e-198d2bf769a2","Type":"ContainerDied","Data":"70e6f12fd4ac63a8c7ad2ceedef116aedf9d61154c6b858a82f7f27d99014abd"} Nov 29 07:59:53 crc kubenswrapper[4828]: I1129 07:59:53.555027 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-47wvx" event={"ID":"be102436-6def-4c8c-a81e-198d2bf769a2","Type":"ContainerStarted","Data":"5d73839e359ff24c85411cc7ba5ca86eaf28c321def2ba23ae79031be22fbbc8"} Nov 29 07:59:53 crc kubenswrapper[4828]: I1129 07:59:53.582853 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-47wvx" podStartSLOduration=3.129917711 podStartE2EDuration="5.58281312s" podCreationTimestamp="2025-11-29 07:59:48 +0000 UTC" firstStartedPulling="2025-11-29 07:59:50.516534144 +0000 UTC m=+3530.138610202" lastFinishedPulling="2025-11-29 07:59:52.969429553 +0000 UTC m=+3532.591505611" observedRunningTime="2025-11-29 07:59:53.57330409 +0000 UTC m=+3533.195380158" watchObservedRunningTime="2025-11-29 07:59:53.58281312 +0000 UTC m=+3533.204889178" Nov 29 07:59:58 crc kubenswrapper[4828]: I1129 07:59:58.922702 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:58 crc kubenswrapper[4828]: I1129 07:59:58.923351 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:58 crc kubenswrapper[4828]: I1129 07:59:58.976697 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:59 crc kubenswrapper[4828]: I1129 07:59:59.729893 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-47wvx" Nov 29 07:59:59 crc kubenswrapper[4828]: I1129 07:59:59.787093 4828 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-47wvx"] Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.156020 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx"] Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.157424 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.160521 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.160556 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.180004 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx"] Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.188243 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31ac1149-1e29-49d4-bd29-3f41451eaa88-config-volume\") pod \"collect-profiles-29406720-txcqx\" (UID: \"31ac1149-1e29-49d4-bd29-3f41451eaa88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.188399 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31ac1149-1e29-49d4-bd29-3f41451eaa88-secret-volume\") pod \"collect-profiles-29406720-txcqx\" (UID: \"31ac1149-1e29-49d4-bd29-3f41451eaa88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" Nov 29 08:00:00 
crc kubenswrapper[4828]: I1129 08:00:00.188483 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqxg2\" (UniqueName: \"kubernetes.io/projected/31ac1149-1e29-49d4-bd29-3f41451eaa88-kube-api-access-pqxg2\") pod \"collect-profiles-29406720-txcqx\" (UID: \"31ac1149-1e29-49d4-bd29-3f41451eaa88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.290093 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31ac1149-1e29-49d4-bd29-3f41451eaa88-config-volume\") pod \"collect-profiles-29406720-txcqx\" (UID: \"31ac1149-1e29-49d4-bd29-3f41451eaa88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.290486 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31ac1149-1e29-49d4-bd29-3f41451eaa88-secret-volume\") pod \"collect-profiles-29406720-txcqx\" (UID: \"31ac1149-1e29-49d4-bd29-3f41451eaa88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.290542 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqxg2\" (UniqueName: \"kubernetes.io/projected/31ac1149-1e29-49d4-bd29-3f41451eaa88-kube-api-access-pqxg2\") pod \"collect-profiles-29406720-txcqx\" (UID: \"31ac1149-1e29-49d4-bd29-3f41451eaa88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.291115 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31ac1149-1e29-49d4-bd29-3f41451eaa88-config-volume\") pod \"collect-profiles-29406720-txcqx\" 
(UID: \"31ac1149-1e29-49d4-bd29-3f41451eaa88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.298316 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31ac1149-1e29-49d4-bd29-3f41451eaa88-secret-volume\") pod \"collect-profiles-29406720-txcqx\" (UID: \"31ac1149-1e29-49d4-bd29-3f41451eaa88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.309603 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqxg2\" (UniqueName: \"kubernetes.io/projected/31ac1149-1e29-49d4-bd29-3f41451eaa88-kube-api-access-pqxg2\") pod \"collect-profiles-29406720-txcqx\" (UID: \"31ac1149-1e29-49d4-bd29-3f41451eaa88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.482982 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" Nov 29 08:00:00 crc kubenswrapper[4828]: I1129 08:00:00.992280 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx"] Nov 29 08:00:01 crc kubenswrapper[4828]: I1129 08:00:01.700728 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-47wvx" podUID="be102436-6def-4c8c-a81e-198d2bf769a2" containerName="registry-server" containerID="cri-o://5d73839e359ff24c85411cc7ba5ca86eaf28c321def2ba23ae79031be22fbbc8" gracePeriod=2 Nov 29 08:00:01 crc kubenswrapper[4828]: I1129 08:00:01.701318 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" event={"ID":"31ac1149-1e29-49d4-bd29-3f41451eaa88","Type":"ContainerStarted","Data":"4ae32afccc105ac3cf1bda6a2e8adb4a5f6e71972f676d0028bdb7fd211fc26a"} Nov 29 08:00:01 crc kubenswrapper[4828]: I1129 08:00:01.701346 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" event={"ID":"31ac1149-1e29-49d4-bd29-3f41451eaa88","Type":"ContainerStarted","Data":"475f55132b8d814f5d49a07041625d86ff9e5f266edd2480575417d5f56e1c87"} Nov 29 08:00:01 crc kubenswrapper[4828]: I1129 08:00:01.727195 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" podStartSLOduration=1.7271749 podStartE2EDuration="1.7271749s" podCreationTimestamp="2025-11-29 08:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:00:01.725121978 +0000 UTC m=+3541.347198036" watchObservedRunningTime="2025-11-29 08:00:01.7271749 +0000 UTC m=+3541.349250958" Nov 29 08:00:02 crc kubenswrapper[4828]: I1129 
08:00:02.711190 4828 generic.go:334] "Generic (PLEG): container finished" podID="31ac1149-1e29-49d4-bd29-3f41451eaa88" containerID="4ae32afccc105ac3cf1bda6a2e8adb4a5f6e71972f676d0028bdb7fd211fc26a" exitCode=0 Nov 29 08:00:02 crc kubenswrapper[4828]: I1129 08:00:02.711263 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" event={"ID":"31ac1149-1e29-49d4-bd29-3f41451eaa88","Type":"ContainerDied","Data":"4ae32afccc105ac3cf1bda6a2e8adb4a5f6e71972f676d0028bdb7fd211fc26a"} Nov 29 08:00:02 crc kubenswrapper[4828]: I1129 08:00:02.714706 4828 generic.go:334] "Generic (PLEG): container finished" podID="be102436-6def-4c8c-a81e-198d2bf769a2" containerID="5d73839e359ff24c85411cc7ba5ca86eaf28c321def2ba23ae79031be22fbbc8" exitCode=0 Nov 29 08:00:02 crc kubenswrapper[4828]: I1129 08:00:02.714758 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-47wvx" event={"ID":"be102436-6def-4c8c-a81e-198d2bf769a2","Type":"ContainerDied","Data":"5d73839e359ff24c85411cc7ba5ca86eaf28c321def2ba23ae79031be22fbbc8"} Nov 29 08:00:02 crc kubenswrapper[4828]: I1129 08:00:02.924227 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-47wvx" Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.051931 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be102436-6def-4c8c-a81e-198d2bf769a2-utilities\") pod \"be102436-6def-4c8c-a81e-198d2bf769a2\" (UID: \"be102436-6def-4c8c-a81e-198d2bf769a2\") " Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.052095 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be102436-6def-4c8c-a81e-198d2bf769a2-catalog-content\") pod \"be102436-6def-4c8c-a81e-198d2bf769a2\" (UID: \"be102436-6def-4c8c-a81e-198d2bf769a2\") " Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.052133 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf2nl\" (UniqueName: \"kubernetes.io/projected/be102436-6def-4c8c-a81e-198d2bf769a2-kube-api-access-hf2nl\") pod \"be102436-6def-4c8c-a81e-198d2bf769a2\" (UID: \"be102436-6def-4c8c-a81e-198d2bf769a2\") " Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.053468 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be102436-6def-4c8c-a81e-198d2bf769a2-utilities" (OuterVolumeSpecName: "utilities") pod "be102436-6def-4c8c-a81e-198d2bf769a2" (UID: "be102436-6def-4c8c-a81e-198d2bf769a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.060905 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be102436-6def-4c8c-a81e-198d2bf769a2-kube-api-access-hf2nl" (OuterVolumeSpecName: "kube-api-access-hf2nl") pod "be102436-6def-4c8c-a81e-198d2bf769a2" (UID: "be102436-6def-4c8c-a81e-198d2bf769a2"). InnerVolumeSpecName "kube-api-access-hf2nl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.114310 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be102436-6def-4c8c-a81e-198d2bf769a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be102436-6def-4c8c-a81e-198d2bf769a2" (UID: "be102436-6def-4c8c-a81e-198d2bf769a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.154433 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be102436-6def-4c8c-a81e-198d2bf769a2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.154491 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf2nl\" (UniqueName: \"kubernetes.io/projected/be102436-6def-4c8c-a81e-198d2bf769a2-kube-api-access-hf2nl\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.154507 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be102436-6def-4c8c-a81e-198d2bf769a2-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.730336 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-47wvx" event={"ID":"be102436-6def-4c8c-a81e-198d2bf769a2","Type":"ContainerDied","Data":"42caefaed3ba880b3fbcb296156004569d0726895c8cd05ae52dccb7d7ef0100"} Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.730392 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-47wvx" Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.730730 4828 scope.go:117] "RemoveContainer" containerID="5d73839e359ff24c85411cc7ba5ca86eaf28c321def2ba23ae79031be22fbbc8" Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.772286 4828 scope.go:117] "RemoveContainer" containerID="70e6f12fd4ac63a8c7ad2ceedef116aedf9d61154c6b858a82f7f27d99014abd" Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.773301 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-47wvx"] Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.789296 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-47wvx"] Nov 29 08:00:03 crc kubenswrapper[4828]: I1129 08:00:03.810403 4828 scope.go:117] "RemoveContainer" containerID="06915d9d20fc5d5e592505bb8393b7df7c08d8557726258946d5cec8382a2f91" Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.224171 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.381228 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31ac1149-1e29-49d4-bd29-3f41451eaa88-config-volume\") pod \"31ac1149-1e29-49d4-bd29-3f41451eaa88\" (UID: \"31ac1149-1e29-49d4-bd29-3f41451eaa88\") " Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.381544 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31ac1149-1e29-49d4-bd29-3f41451eaa88-secret-volume\") pod \"31ac1149-1e29-49d4-bd29-3f41451eaa88\" (UID: \"31ac1149-1e29-49d4-bd29-3f41451eaa88\") " Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.381628 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqxg2\" (UniqueName: \"kubernetes.io/projected/31ac1149-1e29-49d4-bd29-3f41451eaa88-kube-api-access-pqxg2\") pod \"31ac1149-1e29-49d4-bd29-3f41451eaa88\" (UID: \"31ac1149-1e29-49d4-bd29-3f41451eaa88\") " Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.383891 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ac1149-1e29-49d4-bd29-3f41451eaa88-config-volume" (OuterVolumeSpecName: "config-volume") pod "31ac1149-1e29-49d4-bd29-3f41451eaa88" (UID: "31ac1149-1e29-49d4-bd29-3f41451eaa88"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.387811 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ac1149-1e29-49d4-bd29-3f41451eaa88-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "31ac1149-1e29-49d4-bd29-3f41451eaa88" (UID: "31ac1149-1e29-49d4-bd29-3f41451eaa88"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.399303 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ac1149-1e29-49d4-bd29-3f41451eaa88-kube-api-access-pqxg2" (OuterVolumeSpecName: "kube-api-access-pqxg2") pod "31ac1149-1e29-49d4-bd29-3f41451eaa88" (UID: "31ac1149-1e29-49d4-bd29-3f41451eaa88"). InnerVolumeSpecName "kube-api-access-pqxg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.489672 4828 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31ac1149-1e29-49d4-bd29-3f41451eaa88-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.489716 4828 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31ac1149-1e29-49d4-bd29-3f41451eaa88-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.489726 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqxg2\" (UniqueName: \"kubernetes.io/projected/31ac1149-1e29-49d4-bd29-3f41451eaa88-kube-api-access-pqxg2\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.543443 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56"] Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.553934 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-2hg56"] Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.741476 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" 
event={"ID":"31ac1149-1e29-49d4-bd29-3f41451eaa88","Type":"ContainerDied","Data":"475f55132b8d814f5d49a07041625d86ff9e5f266edd2480575417d5f56e1c87"} Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.741563 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="475f55132b8d814f5d49a07041625d86ff9e5f266edd2480575417d5f56e1c87" Nov 29 08:00:04 crc kubenswrapper[4828]: I1129 08:00:04.741630 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-txcqx" Nov 29 08:00:05 crc kubenswrapper[4828]: I1129 08:00:05.439017 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23a148e4-21ef-4210-9d9a-592a9f5a663c" path="/var/lib/kubelet/pods/23a148e4-21ef-4210-9d9a-592a9f5a663c/volumes" Nov 29 08:00:05 crc kubenswrapper[4828]: I1129 08:00:05.442899 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be102436-6def-4c8c-a81e-198d2bf769a2" path="/var/lib/kubelet/pods/be102436-6def-4c8c-a81e-198d2bf769a2/volumes" Nov 29 08:00:13 crc kubenswrapper[4828]: I1129 08:00:13.325642 4828 scope.go:117] "RemoveContainer" containerID="514102033b9802fc6930d884788b91641b27e5e68d75e484cc5ce8303272e5b7" Nov 29 08:00:24 crc kubenswrapper[4828]: I1129 08:00:24.963953 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bhkvr"] Nov 29 08:00:24 crc kubenswrapper[4828]: E1129 08:00:24.965246 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be102436-6def-4c8c-a81e-198d2bf769a2" containerName="extract-utilities" Nov 29 08:00:24 crc kubenswrapper[4828]: I1129 08:00:24.965265 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="be102436-6def-4c8c-a81e-198d2bf769a2" containerName="extract-utilities" Nov 29 08:00:24 crc kubenswrapper[4828]: E1129 08:00:24.965303 4828 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="31ac1149-1e29-49d4-bd29-3f41451eaa88" containerName="collect-profiles" Nov 29 08:00:24 crc kubenswrapper[4828]: I1129 08:00:24.965312 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ac1149-1e29-49d4-bd29-3f41451eaa88" containerName="collect-profiles" Nov 29 08:00:24 crc kubenswrapper[4828]: E1129 08:00:24.965368 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be102436-6def-4c8c-a81e-198d2bf769a2" containerName="extract-content" Nov 29 08:00:24 crc kubenswrapper[4828]: I1129 08:00:24.965379 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="be102436-6def-4c8c-a81e-198d2bf769a2" containerName="extract-content" Nov 29 08:00:24 crc kubenswrapper[4828]: E1129 08:00:24.965409 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be102436-6def-4c8c-a81e-198d2bf769a2" containerName="registry-server" Nov 29 08:00:24 crc kubenswrapper[4828]: I1129 08:00:24.965418 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="be102436-6def-4c8c-a81e-198d2bf769a2" containerName="registry-server" Nov 29 08:00:24 crc kubenswrapper[4828]: I1129 08:00:24.965811 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ac1149-1e29-49d4-bd29-3f41451eaa88" containerName="collect-profiles" Nov 29 08:00:24 crc kubenswrapper[4828]: I1129 08:00:24.965832 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="be102436-6def-4c8c-a81e-198d2bf769a2" containerName="registry-server" Nov 29 08:00:24 crc kubenswrapper[4828]: I1129 08:00:24.967575 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bhkvr" Nov 29 08:00:24 crc kubenswrapper[4828]: I1129 08:00:24.986934 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhkvr"] Nov 29 08:00:25 crc kubenswrapper[4828]: I1129 08:00:25.088566 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a82506-2db4-42bb-9aa7-db19ebf97f06-utilities\") pod \"community-operators-bhkvr\" (UID: \"f3a82506-2db4-42bb-9aa7-db19ebf97f06\") " pod="openshift-marketplace/community-operators-bhkvr" Nov 29 08:00:25 crc kubenswrapper[4828]: I1129 08:00:25.088649 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a82506-2db4-42bb-9aa7-db19ebf97f06-catalog-content\") pod \"community-operators-bhkvr\" (UID: \"f3a82506-2db4-42bb-9aa7-db19ebf97f06\") " pod="openshift-marketplace/community-operators-bhkvr" Nov 29 08:00:25 crc kubenswrapper[4828]: I1129 08:00:25.088844 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx28w\" (UniqueName: \"kubernetes.io/projected/f3a82506-2db4-42bb-9aa7-db19ebf97f06-kube-api-access-nx28w\") pod \"community-operators-bhkvr\" (UID: \"f3a82506-2db4-42bb-9aa7-db19ebf97f06\") " pod="openshift-marketplace/community-operators-bhkvr" Nov 29 08:00:25 crc kubenswrapper[4828]: I1129 08:00:25.191170 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a82506-2db4-42bb-9aa7-db19ebf97f06-utilities\") pod \"community-operators-bhkvr\" (UID: \"f3a82506-2db4-42bb-9aa7-db19ebf97f06\") " pod="openshift-marketplace/community-operators-bhkvr" Nov 29 08:00:25 crc kubenswrapper[4828]: I1129 08:00:25.191288 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a82506-2db4-42bb-9aa7-db19ebf97f06-catalog-content\") pod \"community-operators-bhkvr\" (UID: \"f3a82506-2db4-42bb-9aa7-db19ebf97f06\") " pod="openshift-marketplace/community-operators-bhkvr" Nov 29 08:00:25 crc kubenswrapper[4828]: I1129 08:00:25.191345 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx28w\" (UniqueName: \"kubernetes.io/projected/f3a82506-2db4-42bb-9aa7-db19ebf97f06-kube-api-access-nx28w\") pod \"community-operators-bhkvr\" (UID: \"f3a82506-2db4-42bb-9aa7-db19ebf97f06\") " pod="openshift-marketplace/community-operators-bhkvr" Nov 29 08:00:25 crc kubenswrapper[4828]: I1129 08:00:25.191962 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a82506-2db4-42bb-9aa7-db19ebf97f06-catalog-content\") pod \"community-operators-bhkvr\" (UID: \"f3a82506-2db4-42bb-9aa7-db19ebf97f06\") " pod="openshift-marketplace/community-operators-bhkvr" Nov 29 08:00:25 crc kubenswrapper[4828]: I1129 08:00:25.192288 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a82506-2db4-42bb-9aa7-db19ebf97f06-utilities\") pod \"community-operators-bhkvr\" (UID: \"f3a82506-2db4-42bb-9aa7-db19ebf97f06\") " pod="openshift-marketplace/community-operators-bhkvr" Nov 29 08:00:25 crc kubenswrapper[4828]: I1129 08:00:25.216920 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx28w\" (UniqueName: \"kubernetes.io/projected/f3a82506-2db4-42bb-9aa7-db19ebf97f06-kube-api-access-nx28w\") pod \"community-operators-bhkvr\" (UID: \"f3a82506-2db4-42bb-9aa7-db19ebf97f06\") " pod="openshift-marketplace/community-operators-bhkvr" Nov 29 08:00:25 crc kubenswrapper[4828]: I1129 08:00:25.313733 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bhkvr" Nov 29 08:00:25 crc kubenswrapper[4828]: I1129 08:00:25.919937 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhkvr"] Nov 29 08:00:25 crc kubenswrapper[4828]: I1129 08:00:25.967685 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhkvr" event={"ID":"f3a82506-2db4-42bb-9aa7-db19ebf97f06","Type":"ContainerStarted","Data":"40e20650d3f937407adbfe12e7a49d335ab53448ea9964fc2299b68b2bb00aba"} Nov 29 08:00:26 crc kubenswrapper[4828]: I1129 08:00:26.983310 4828 generic.go:334] "Generic (PLEG): container finished" podID="f3a82506-2db4-42bb-9aa7-db19ebf97f06" containerID="9e4b2b92854e2739998ebb9e3c002539627aea30a05ce6b5e4700fb258169145" exitCode=0 Nov 29 08:00:26 crc kubenswrapper[4828]: I1129 08:00:26.983424 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhkvr" event={"ID":"f3a82506-2db4-42bb-9aa7-db19ebf97f06","Type":"ContainerDied","Data":"9e4b2b92854e2739998ebb9e3c002539627aea30a05ce6b5e4700fb258169145"} Nov 29 08:00:26 crc kubenswrapper[4828]: I1129 08:00:26.986501 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:00:33 crc kubenswrapper[4828]: I1129 08:00:33.059459 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhkvr" event={"ID":"f3a82506-2db4-42bb-9aa7-db19ebf97f06","Type":"ContainerStarted","Data":"d0c3bfed938d3d362b446ef734087f235cf1ff8330767c108b504a4c9142c978"} Nov 29 08:00:34 crc kubenswrapper[4828]: I1129 08:00:34.070902 4828 generic.go:334] "Generic (PLEG): container finished" podID="f3a82506-2db4-42bb-9aa7-db19ebf97f06" containerID="d0c3bfed938d3d362b446ef734087f235cf1ff8330767c108b504a4c9142c978" exitCode=0 Nov 29 08:00:34 crc kubenswrapper[4828]: I1129 08:00:34.070962 4828 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-bhkvr" event={"ID":"f3a82506-2db4-42bb-9aa7-db19ebf97f06","Type":"ContainerDied","Data":"d0c3bfed938d3d362b446ef734087f235cf1ff8330767c108b504a4c9142c978"} Nov 29 08:00:36 crc kubenswrapper[4828]: I1129 08:00:36.091221 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhkvr" event={"ID":"f3a82506-2db4-42bb-9aa7-db19ebf97f06","Type":"ContainerStarted","Data":"12eaea1d79fe6ac55a5a3c94741aabcca853d11c737f05b4c4b039b25a36f0b9"} Nov 29 08:00:36 crc kubenswrapper[4828]: I1129 08:00:36.118651 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bhkvr" podStartSLOduration=4.42333931 podStartE2EDuration="12.118611048s" podCreationTimestamp="2025-11-29 08:00:24 +0000 UTC" firstStartedPulling="2025-11-29 08:00:26.986075677 +0000 UTC m=+3566.608151745" lastFinishedPulling="2025-11-29 08:00:34.681347425 +0000 UTC m=+3574.303423483" observedRunningTime="2025-11-29 08:00:36.110935034 +0000 UTC m=+3575.733011112" watchObservedRunningTime="2025-11-29 08:00:36.118611048 +0000 UTC m=+3575.740687106" Nov 29 08:00:37 crc kubenswrapper[4828]: I1129 08:00:37.605953 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mkbwb"] Nov 29 08:00:37 crc kubenswrapper[4828]: I1129 08:00:37.608206 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:37 crc kubenswrapper[4828]: I1129 08:00:37.625981 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkbwb"] Nov 29 08:00:37 crc kubenswrapper[4828]: I1129 08:00:37.701201 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ba5b89-0997-470b-90ec-a05a380c2b95-utilities\") pod \"redhat-marketplace-mkbwb\" (UID: \"70ba5b89-0997-470b-90ec-a05a380c2b95\") " pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:37 crc kubenswrapper[4828]: I1129 08:00:37.701292 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hlhh\" (UniqueName: \"kubernetes.io/projected/70ba5b89-0997-470b-90ec-a05a380c2b95-kube-api-access-9hlhh\") pod \"redhat-marketplace-mkbwb\" (UID: \"70ba5b89-0997-470b-90ec-a05a380c2b95\") " pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:37 crc kubenswrapper[4828]: I1129 08:00:37.701352 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ba5b89-0997-470b-90ec-a05a380c2b95-catalog-content\") pod \"redhat-marketplace-mkbwb\" (UID: \"70ba5b89-0997-470b-90ec-a05a380c2b95\") " pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:37 crc kubenswrapper[4828]: I1129 08:00:37.803246 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ba5b89-0997-470b-90ec-a05a380c2b95-utilities\") pod \"redhat-marketplace-mkbwb\" (UID: \"70ba5b89-0997-470b-90ec-a05a380c2b95\") " pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:37 crc kubenswrapper[4828]: I1129 08:00:37.803353 4828 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9hlhh\" (UniqueName: \"kubernetes.io/projected/70ba5b89-0997-470b-90ec-a05a380c2b95-kube-api-access-9hlhh\") pod \"redhat-marketplace-mkbwb\" (UID: \"70ba5b89-0997-470b-90ec-a05a380c2b95\") " pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:37 crc kubenswrapper[4828]: I1129 08:00:37.803388 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ba5b89-0997-470b-90ec-a05a380c2b95-catalog-content\") pod \"redhat-marketplace-mkbwb\" (UID: \"70ba5b89-0997-470b-90ec-a05a380c2b95\") " pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:37 crc kubenswrapper[4828]: I1129 08:00:37.803929 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ba5b89-0997-470b-90ec-a05a380c2b95-utilities\") pod \"redhat-marketplace-mkbwb\" (UID: \"70ba5b89-0997-470b-90ec-a05a380c2b95\") " pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:37 crc kubenswrapper[4828]: I1129 08:00:37.803998 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ba5b89-0997-470b-90ec-a05a380c2b95-catalog-content\") pod \"redhat-marketplace-mkbwb\" (UID: \"70ba5b89-0997-470b-90ec-a05a380c2b95\") " pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:37 crc kubenswrapper[4828]: I1129 08:00:37.825824 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hlhh\" (UniqueName: \"kubernetes.io/projected/70ba5b89-0997-470b-90ec-a05a380c2b95-kube-api-access-9hlhh\") pod \"redhat-marketplace-mkbwb\" (UID: \"70ba5b89-0997-470b-90ec-a05a380c2b95\") " pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:37 crc kubenswrapper[4828]: I1129 08:00:37.930466 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:38 crc kubenswrapper[4828]: I1129 08:00:38.520551 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkbwb"] Nov 29 08:00:38 crc kubenswrapper[4828]: W1129 08:00:38.526149 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70ba5b89_0997_470b_90ec_a05a380c2b95.slice/crio-87bb52a0dd4caef73c666f55742c073aac7916e990d4ebbed0156376c0ad2ca1 WatchSource:0}: Error finding container 87bb52a0dd4caef73c666f55742c073aac7916e990d4ebbed0156376c0ad2ca1: Status 404 returned error can't find the container with id 87bb52a0dd4caef73c666f55742c073aac7916e990d4ebbed0156376c0ad2ca1 Nov 29 08:00:39 crc kubenswrapper[4828]: I1129 08:00:39.135728 4828 generic.go:334] "Generic (PLEG): container finished" podID="70ba5b89-0997-470b-90ec-a05a380c2b95" containerID="a7e27e7f0c51a4594e9f16dda4d3e2695625df75588d9d1e102ecf127bbc59e7" exitCode=0 Nov 29 08:00:39 crc kubenswrapper[4828]: I1129 08:00:39.135800 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkbwb" event={"ID":"70ba5b89-0997-470b-90ec-a05a380c2b95","Type":"ContainerDied","Data":"a7e27e7f0c51a4594e9f16dda4d3e2695625df75588d9d1e102ecf127bbc59e7"} Nov 29 08:00:39 crc kubenswrapper[4828]: I1129 08:00:39.136028 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkbwb" event={"ID":"70ba5b89-0997-470b-90ec-a05a380c2b95","Type":"ContainerStarted","Data":"87bb52a0dd4caef73c666f55742c073aac7916e990d4ebbed0156376c0ad2ca1"} Nov 29 08:00:41 crc kubenswrapper[4828]: I1129 08:00:41.155175 4828 generic.go:334] "Generic (PLEG): container finished" podID="70ba5b89-0997-470b-90ec-a05a380c2b95" containerID="ffeecf3101df56be94efdefad1be71f4696d517336948a90d7e2fa92406b1720" exitCode=0 Nov 29 08:00:41 crc kubenswrapper[4828]: I1129 
08:00:41.155242 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkbwb" event={"ID":"70ba5b89-0997-470b-90ec-a05a380c2b95","Type":"ContainerDied","Data":"ffeecf3101df56be94efdefad1be71f4696d517336948a90d7e2fa92406b1720"} Nov 29 08:00:43 crc kubenswrapper[4828]: I1129 08:00:43.174201 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkbwb" event={"ID":"70ba5b89-0997-470b-90ec-a05a380c2b95","Type":"ContainerStarted","Data":"eee57692354822e42c4547abb931ffe8965d945faaf85e98b1d24f2f8505f03a"} Nov 29 08:00:43 crc kubenswrapper[4828]: I1129 08:00:43.198588 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mkbwb" podStartSLOduration=3.056330924 podStartE2EDuration="6.19856783s" podCreationTimestamp="2025-11-29 08:00:37 +0000 UTC" firstStartedPulling="2025-11-29 08:00:39.137792994 +0000 UTC m=+3578.759869052" lastFinishedPulling="2025-11-29 08:00:42.2800299 +0000 UTC m=+3581.902105958" observedRunningTime="2025-11-29 08:00:43.190592398 +0000 UTC m=+3582.812668456" watchObservedRunningTime="2025-11-29 08:00:43.19856783 +0000 UTC m=+3582.820643888" Nov 29 08:00:45 crc kubenswrapper[4828]: I1129 08:00:45.314913 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bhkvr" Nov 29 08:00:45 crc kubenswrapper[4828]: I1129 08:00:45.316917 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bhkvr" Nov 29 08:00:45 crc kubenswrapper[4828]: I1129 08:00:45.361603 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bhkvr" Nov 29 08:00:46 crc kubenswrapper[4828]: I1129 08:00:46.250090 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bhkvr" Nov 29 
08:00:46 crc kubenswrapper[4828]: I1129 08:00:46.326328 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhkvr"] Nov 29 08:00:46 crc kubenswrapper[4828]: I1129 08:00:46.374119 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jwbqv"] Nov 29 08:00:46 crc kubenswrapper[4828]: I1129 08:00:46.374411 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jwbqv" podUID="8705c903-8693-4892-a4c1-d50a086db042" containerName="registry-server" containerID="cri-o://76650ae245a72fec68ef40156c8d8079b3c00df4aafe3baebbc64c2680235bd5" gracePeriod=2 Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.155558 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jwbqv" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.210378 4828 generic.go:334] "Generic (PLEG): container finished" podID="8705c903-8693-4892-a4c1-d50a086db042" containerID="76650ae245a72fec68ef40156c8d8079b3c00df4aafe3baebbc64c2680235bd5" exitCode=0 Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.210442 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jwbqv" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.210468 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwbqv" event={"ID":"8705c903-8693-4892-a4c1-d50a086db042","Type":"ContainerDied","Data":"76650ae245a72fec68ef40156c8d8079b3c00df4aafe3baebbc64c2680235bd5"} Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.210555 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwbqv" event={"ID":"8705c903-8693-4892-a4c1-d50a086db042","Type":"ContainerDied","Data":"ecff091eb4c4219d1f872584c7dd43e98d566bf504ffa2592072770dd6423fa7"} Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.210584 4828 scope.go:117] "RemoveContainer" containerID="76650ae245a72fec68ef40156c8d8079b3c00df4aafe3baebbc64c2680235bd5" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.241735 4828 scope.go:117] "RemoveContainer" containerID="fecd65a5b45469ac0f2b3b03e214a600c632169e80122048ef7e5a992859bc43" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.273459 4828 scope.go:117] "RemoveContainer" containerID="eda7bae9a9333e8b84ff65d8ac43eebe2aced8f7b799eb89bac14edad471a62f" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.277089 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8705c903-8693-4892-a4c1-d50a086db042-utilities\") pod \"8705c903-8693-4892-a4c1-d50a086db042\" (UID: \"8705c903-8693-4892-a4c1-d50a086db042\") " Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.277298 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8705c903-8693-4892-a4c1-d50a086db042-catalog-content\") pod \"8705c903-8693-4892-a4c1-d50a086db042\" (UID: \"8705c903-8693-4892-a4c1-d50a086db042\") " Nov 29 08:00:47 crc kubenswrapper[4828]: 
I1129 08:00:47.278198 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg8gt\" (UniqueName: \"kubernetes.io/projected/8705c903-8693-4892-a4c1-d50a086db042-kube-api-access-vg8gt\") pod \"8705c903-8693-4892-a4c1-d50a086db042\" (UID: \"8705c903-8693-4892-a4c1-d50a086db042\") " Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.281038 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8705c903-8693-4892-a4c1-d50a086db042-utilities" (OuterVolumeSpecName: "utilities") pod "8705c903-8693-4892-a4c1-d50a086db042" (UID: "8705c903-8693-4892-a4c1-d50a086db042"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.288145 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8705c903-8693-4892-a4c1-d50a086db042-kube-api-access-vg8gt" (OuterVolumeSpecName: "kube-api-access-vg8gt") pod "8705c903-8693-4892-a4c1-d50a086db042" (UID: "8705c903-8693-4892-a4c1-d50a086db042"). InnerVolumeSpecName "kube-api-access-vg8gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.367615 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8705c903-8693-4892-a4c1-d50a086db042-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8705c903-8693-4892-a4c1-d50a086db042" (UID: "8705c903-8693-4892-a4c1-d50a086db042"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.373966 4828 scope.go:117] "RemoveContainer" containerID="76650ae245a72fec68ef40156c8d8079b3c00df4aafe3baebbc64c2680235bd5" Nov 29 08:00:47 crc kubenswrapper[4828]: E1129 08:00:47.374630 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76650ae245a72fec68ef40156c8d8079b3c00df4aafe3baebbc64c2680235bd5\": container with ID starting with 76650ae245a72fec68ef40156c8d8079b3c00df4aafe3baebbc64c2680235bd5 not found: ID does not exist" containerID="76650ae245a72fec68ef40156c8d8079b3c00df4aafe3baebbc64c2680235bd5" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.374677 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76650ae245a72fec68ef40156c8d8079b3c00df4aafe3baebbc64c2680235bd5"} err="failed to get container status \"76650ae245a72fec68ef40156c8d8079b3c00df4aafe3baebbc64c2680235bd5\": rpc error: code = NotFound desc = could not find container \"76650ae245a72fec68ef40156c8d8079b3c00df4aafe3baebbc64c2680235bd5\": container with ID starting with 76650ae245a72fec68ef40156c8d8079b3c00df4aafe3baebbc64c2680235bd5 not found: ID does not exist" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.374699 4828 scope.go:117] "RemoveContainer" containerID="fecd65a5b45469ac0f2b3b03e214a600c632169e80122048ef7e5a992859bc43" Nov 29 08:00:47 crc kubenswrapper[4828]: E1129 08:00:47.374927 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fecd65a5b45469ac0f2b3b03e214a600c632169e80122048ef7e5a992859bc43\": container with ID starting with fecd65a5b45469ac0f2b3b03e214a600c632169e80122048ef7e5a992859bc43 not found: ID does not exist" containerID="fecd65a5b45469ac0f2b3b03e214a600c632169e80122048ef7e5a992859bc43" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.374948 
4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fecd65a5b45469ac0f2b3b03e214a600c632169e80122048ef7e5a992859bc43"} err="failed to get container status \"fecd65a5b45469ac0f2b3b03e214a600c632169e80122048ef7e5a992859bc43\": rpc error: code = NotFound desc = could not find container \"fecd65a5b45469ac0f2b3b03e214a600c632169e80122048ef7e5a992859bc43\": container with ID starting with fecd65a5b45469ac0f2b3b03e214a600c632169e80122048ef7e5a992859bc43 not found: ID does not exist" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.374961 4828 scope.go:117] "RemoveContainer" containerID="eda7bae9a9333e8b84ff65d8ac43eebe2aced8f7b799eb89bac14edad471a62f" Nov 29 08:00:47 crc kubenswrapper[4828]: E1129 08:00:47.375151 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eda7bae9a9333e8b84ff65d8ac43eebe2aced8f7b799eb89bac14edad471a62f\": container with ID starting with eda7bae9a9333e8b84ff65d8ac43eebe2aced8f7b799eb89bac14edad471a62f not found: ID does not exist" containerID="eda7bae9a9333e8b84ff65d8ac43eebe2aced8f7b799eb89bac14edad471a62f" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.375177 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda7bae9a9333e8b84ff65d8ac43eebe2aced8f7b799eb89bac14edad471a62f"} err="failed to get container status \"eda7bae9a9333e8b84ff65d8ac43eebe2aced8f7b799eb89bac14edad471a62f\": rpc error: code = NotFound desc = could not find container \"eda7bae9a9333e8b84ff65d8ac43eebe2aced8f7b799eb89bac14edad471a62f\": container with ID starting with eda7bae9a9333e8b84ff65d8ac43eebe2aced8f7b799eb89bac14edad471a62f not found: ID does not exist" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.381244 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg8gt\" (UniqueName: 
\"kubernetes.io/projected/8705c903-8693-4892-a4c1-d50a086db042-kube-api-access-vg8gt\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.381284 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8705c903-8693-4892-a4c1-d50a086db042-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.381297 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8705c903-8693-4892-a4c1-d50a086db042-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.536051 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jwbqv"] Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.545905 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jwbqv"] Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.930580 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.930950 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:47 crc kubenswrapper[4828]: I1129 08:00:47.986090 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:48 crc kubenswrapper[4828]: I1129 08:00:48.281696 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:49 crc kubenswrapper[4828]: I1129 08:00:49.422779 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8705c903-8693-4892-a4c1-d50a086db042" 
path="/var/lib/kubelet/pods/8705c903-8693-4892-a4c1-d50a086db042/volumes" Nov 29 08:00:49 crc kubenswrapper[4828]: I1129 08:00:49.804132 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkbwb"] Nov 29 08:00:50 crc kubenswrapper[4828]: I1129 08:00:50.235758 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mkbwb" podUID="70ba5b89-0997-470b-90ec-a05a380c2b95" containerName="registry-server" containerID="cri-o://eee57692354822e42c4547abb931ffe8965d945faaf85e98b1d24f2f8505f03a" gracePeriod=2 Nov 29 08:00:50 crc kubenswrapper[4828]: I1129 08:00:50.849619 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:50 crc kubenswrapper[4828]: I1129 08:00:50.950170 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ba5b89-0997-470b-90ec-a05a380c2b95-catalog-content\") pod \"70ba5b89-0997-470b-90ec-a05a380c2b95\" (UID: \"70ba5b89-0997-470b-90ec-a05a380c2b95\") " Nov 29 08:00:50 crc kubenswrapper[4828]: I1129 08:00:50.950302 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hlhh\" (UniqueName: \"kubernetes.io/projected/70ba5b89-0997-470b-90ec-a05a380c2b95-kube-api-access-9hlhh\") pod \"70ba5b89-0997-470b-90ec-a05a380c2b95\" (UID: \"70ba5b89-0997-470b-90ec-a05a380c2b95\") " Nov 29 08:00:50 crc kubenswrapper[4828]: I1129 08:00:50.950498 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ba5b89-0997-470b-90ec-a05a380c2b95-utilities\") pod \"70ba5b89-0997-470b-90ec-a05a380c2b95\" (UID: \"70ba5b89-0997-470b-90ec-a05a380c2b95\") " Nov 29 08:00:50 crc kubenswrapper[4828]: I1129 08:00:50.951219 4828 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70ba5b89-0997-470b-90ec-a05a380c2b95-utilities" (OuterVolumeSpecName: "utilities") pod "70ba5b89-0997-470b-90ec-a05a380c2b95" (UID: "70ba5b89-0997-470b-90ec-a05a380c2b95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:00:50 crc kubenswrapper[4828]: I1129 08:00:50.956928 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ba5b89-0997-470b-90ec-a05a380c2b95-kube-api-access-9hlhh" (OuterVolumeSpecName: "kube-api-access-9hlhh") pod "70ba5b89-0997-470b-90ec-a05a380c2b95" (UID: "70ba5b89-0997-470b-90ec-a05a380c2b95"). InnerVolumeSpecName "kube-api-access-9hlhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:00:50 crc kubenswrapper[4828]: I1129 08:00:50.969304 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70ba5b89-0997-470b-90ec-a05a380c2b95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70ba5b89-0997-470b-90ec-a05a380c2b95" (UID: "70ba5b89-0997-470b-90ec-a05a380c2b95"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.052620 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ba5b89-0997-470b-90ec-a05a380c2b95-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.052916 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ba5b89-0997-470b-90ec-a05a380c2b95-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.052930 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hlhh\" (UniqueName: \"kubernetes.io/projected/70ba5b89-0997-470b-90ec-a05a380c2b95-kube-api-access-9hlhh\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.246390 4828 generic.go:334] "Generic (PLEG): container finished" podID="70ba5b89-0997-470b-90ec-a05a380c2b95" containerID="eee57692354822e42c4547abb931ffe8965d945faaf85e98b1d24f2f8505f03a" exitCode=0 Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.246443 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkbwb" Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.246452 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkbwb" event={"ID":"70ba5b89-0997-470b-90ec-a05a380c2b95","Type":"ContainerDied","Data":"eee57692354822e42c4547abb931ffe8965d945faaf85e98b1d24f2f8505f03a"} Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.246564 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkbwb" event={"ID":"70ba5b89-0997-470b-90ec-a05a380c2b95","Type":"ContainerDied","Data":"87bb52a0dd4caef73c666f55742c073aac7916e990d4ebbed0156376c0ad2ca1"} Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.246580 4828 scope.go:117] "RemoveContainer" containerID="eee57692354822e42c4547abb931ffe8965d945faaf85e98b1d24f2f8505f03a" Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.279331 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkbwb"] Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.286639 4828 scope.go:117] "RemoveContainer" containerID="ffeecf3101df56be94efdefad1be71f4696d517336948a90d7e2fa92406b1720" Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.287581 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkbwb"] Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.314281 4828 scope.go:117] "RemoveContainer" containerID="a7e27e7f0c51a4594e9f16dda4d3e2695625df75588d9d1e102ecf127bbc59e7" Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.353886 4828 scope.go:117] "RemoveContainer" containerID="eee57692354822e42c4547abb931ffe8965d945faaf85e98b1d24f2f8505f03a" Nov 29 08:00:51 crc kubenswrapper[4828]: E1129 08:00:51.354491 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"eee57692354822e42c4547abb931ffe8965d945faaf85e98b1d24f2f8505f03a\": container with ID starting with eee57692354822e42c4547abb931ffe8965d945faaf85e98b1d24f2f8505f03a not found: ID does not exist" containerID="eee57692354822e42c4547abb931ffe8965d945faaf85e98b1d24f2f8505f03a" Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.354540 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eee57692354822e42c4547abb931ffe8965d945faaf85e98b1d24f2f8505f03a"} err="failed to get container status \"eee57692354822e42c4547abb931ffe8965d945faaf85e98b1d24f2f8505f03a\": rpc error: code = NotFound desc = could not find container \"eee57692354822e42c4547abb931ffe8965d945faaf85e98b1d24f2f8505f03a\": container with ID starting with eee57692354822e42c4547abb931ffe8965d945faaf85e98b1d24f2f8505f03a not found: ID does not exist" Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.354578 4828 scope.go:117] "RemoveContainer" containerID="ffeecf3101df56be94efdefad1be71f4696d517336948a90d7e2fa92406b1720" Nov 29 08:00:51 crc kubenswrapper[4828]: E1129 08:00:51.355017 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffeecf3101df56be94efdefad1be71f4696d517336948a90d7e2fa92406b1720\": container with ID starting with ffeecf3101df56be94efdefad1be71f4696d517336948a90d7e2fa92406b1720 not found: ID does not exist" containerID="ffeecf3101df56be94efdefad1be71f4696d517336948a90d7e2fa92406b1720" Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.355087 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffeecf3101df56be94efdefad1be71f4696d517336948a90d7e2fa92406b1720"} err="failed to get container status \"ffeecf3101df56be94efdefad1be71f4696d517336948a90d7e2fa92406b1720\": rpc error: code = NotFound desc = could not find container \"ffeecf3101df56be94efdefad1be71f4696d517336948a90d7e2fa92406b1720\": container with ID 
starting with ffeecf3101df56be94efdefad1be71f4696d517336948a90d7e2fa92406b1720 not found: ID does not exist" Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.355123 4828 scope.go:117] "RemoveContainer" containerID="a7e27e7f0c51a4594e9f16dda4d3e2695625df75588d9d1e102ecf127bbc59e7" Nov 29 08:00:51 crc kubenswrapper[4828]: E1129 08:00:51.356015 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e27e7f0c51a4594e9f16dda4d3e2695625df75588d9d1e102ecf127bbc59e7\": container with ID starting with a7e27e7f0c51a4594e9f16dda4d3e2695625df75588d9d1e102ecf127bbc59e7 not found: ID does not exist" containerID="a7e27e7f0c51a4594e9f16dda4d3e2695625df75588d9d1e102ecf127bbc59e7" Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.356051 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e27e7f0c51a4594e9f16dda4d3e2695625df75588d9d1e102ecf127bbc59e7"} err="failed to get container status \"a7e27e7f0c51a4594e9f16dda4d3e2695625df75588d9d1e102ecf127bbc59e7\": rpc error: code = NotFound desc = could not find container \"a7e27e7f0c51a4594e9f16dda4d3e2695625df75588d9d1e102ecf127bbc59e7\": container with ID starting with a7e27e7f0c51a4594e9f16dda4d3e2695625df75588d9d1e102ecf127bbc59e7 not found: ID does not exist" Nov 29 08:00:51 crc kubenswrapper[4828]: I1129 08:00:51.423746 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70ba5b89-0997-470b-90ec-a05a380c2b95" path="/var/lib/kubelet/pods/70ba5b89-0997-470b-90ec-a05a380c2b95/volumes" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.149189 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29406721-tmr8c"] Nov 29 08:01:00 crc kubenswrapper[4828]: E1129 08:01:00.150295 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ba5b89-0997-470b-90ec-a05a380c2b95" containerName="registry-server" Nov 29 08:01:00 crc 
kubenswrapper[4828]: I1129 08:01:00.150315 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ba5b89-0997-470b-90ec-a05a380c2b95" containerName="registry-server" Nov 29 08:01:00 crc kubenswrapper[4828]: E1129 08:01:00.150344 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705c903-8693-4892-a4c1-d50a086db042" containerName="registry-server" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.150352 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705c903-8693-4892-a4c1-d50a086db042" containerName="registry-server" Nov 29 08:01:00 crc kubenswrapper[4828]: E1129 08:01:00.150365 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705c903-8693-4892-a4c1-d50a086db042" containerName="extract-content" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.150375 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705c903-8693-4892-a4c1-d50a086db042" containerName="extract-content" Nov 29 08:01:00 crc kubenswrapper[4828]: E1129 08:01:00.150393 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ba5b89-0997-470b-90ec-a05a380c2b95" containerName="extract-content" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.150400 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ba5b89-0997-470b-90ec-a05a380c2b95" containerName="extract-content" Nov 29 08:01:00 crc kubenswrapper[4828]: E1129 08:01:00.150412 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705c903-8693-4892-a4c1-d50a086db042" containerName="extract-utilities" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.150420 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705c903-8693-4892-a4c1-d50a086db042" containerName="extract-utilities" Nov 29 08:01:00 crc kubenswrapper[4828]: E1129 08:01:00.150437 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ba5b89-0997-470b-90ec-a05a380c2b95" containerName="extract-utilities" Nov 29 08:01:00 crc 
kubenswrapper[4828]: I1129 08:01:00.150443 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ba5b89-0997-470b-90ec-a05a380c2b95" containerName="extract-utilities" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.150695 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ba5b89-0997-470b-90ec-a05a380c2b95" containerName="registry-server" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.150732 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="8705c903-8693-4892-a4c1-d50a086db042" containerName="registry-server" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.151495 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.164534 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29406721-tmr8c"] Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.237634 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-fernet-keys\") pod \"keystone-cron-29406721-tmr8c\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.237721 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzzfl\" (UniqueName: \"kubernetes.io/projected/84a419c7-486a-4b21-a023-c74395681e1d-kube-api-access-rzzfl\") pod \"keystone-cron-29406721-tmr8c\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.237765 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-config-data\") pod \"keystone-cron-29406721-tmr8c\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.237867 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-combined-ca-bundle\") pod \"keystone-cron-29406721-tmr8c\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.339204 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-fernet-keys\") pod \"keystone-cron-29406721-tmr8c\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.339338 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzzfl\" (UniqueName: \"kubernetes.io/projected/84a419c7-486a-4b21-a023-c74395681e1d-kube-api-access-rzzfl\") pod \"keystone-cron-29406721-tmr8c\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.339393 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-config-data\") pod \"keystone-cron-29406721-tmr8c\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.339480 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-combined-ca-bundle\") pod \"keystone-cron-29406721-tmr8c\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.346135 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-combined-ca-bundle\") pod \"keystone-cron-29406721-tmr8c\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.346333 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-config-data\") pod \"keystone-cron-29406721-tmr8c\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.346360 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-fernet-keys\") pod \"keystone-cron-29406721-tmr8c\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.356978 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzzfl\" (UniqueName: \"kubernetes.io/projected/84a419c7-486a-4b21-a023-c74395681e1d-kube-api-access-rzzfl\") pod \"keystone-cron-29406721-tmr8c\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.479058 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:00 crc kubenswrapper[4828]: I1129 08:01:00.936660 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29406721-tmr8c"] Nov 29 08:01:01 crc kubenswrapper[4828]: I1129 08:01:01.361044 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-tmr8c" event={"ID":"84a419c7-486a-4b21-a023-c74395681e1d","Type":"ContainerStarted","Data":"cdde3c53475ae1ec560d6c47b0fd700896c2f4e5dd19ef984dc6491f9cb5d95c"} Nov 29 08:01:01 crc kubenswrapper[4828]: I1129 08:01:01.361456 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-tmr8c" event={"ID":"84a419c7-486a-4b21-a023-c74395681e1d","Type":"ContainerStarted","Data":"20bf54b31d38e020854de6bec7217089b306ab1fae66fc3259e1951da59de347"} Nov 29 08:01:01 crc kubenswrapper[4828]: I1129 08:01:01.376939 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29406721-tmr8c" podStartSLOduration=1.376914591 podStartE2EDuration="1.376914591s" podCreationTimestamp="2025-11-29 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:01:01.373478294 +0000 UTC m=+3600.995554352" watchObservedRunningTime="2025-11-29 08:01:01.376914591 +0000 UTC m=+3600.998990649" Nov 29 08:01:05 crc kubenswrapper[4828]: I1129 08:01:05.409742 4828 generic.go:334] "Generic (PLEG): container finished" podID="84a419c7-486a-4b21-a023-c74395681e1d" containerID="cdde3c53475ae1ec560d6c47b0fd700896c2f4e5dd19ef984dc6491f9cb5d95c" exitCode=0 Nov 29 08:01:05 crc kubenswrapper[4828]: I1129 08:01:05.409958 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-tmr8c" 
event={"ID":"84a419c7-486a-4b21-a023-c74395681e1d","Type":"ContainerDied","Data":"cdde3c53475ae1ec560d6c47b0fd700896c2f4e5dd19ef984dc6491f9cb5d95c"} Nov 29 08:01:06 crc kubenswrapper[4828]: I1129 08:01:06.926822 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:06 crc kubenswrapper[4828]: I1129 08:01:06.967781 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-combined-ca-bundle\") pod \"84a419c7-486a-4b21-a023-c74395681e1d\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " Nov 29 08:01:06 crc kubenswrapper[4828]: I1129 08:01:06.967910 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzzfl\" (UniqueName: \"kubernetes.io/projected/84a419c7-486a-4b21-a023-c74395681e1d-kube-api-access-rzzfl\") pod \"84a419c7-486a-4b21-a023-c74395681e1d\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " Nov 29 08:01:06 crc kubenswrapper[4828]: I1129 08:01:06.968049 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-config-data\") pod \"84a419c7-486a-4b21-a023-c74395681e1d\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " Nov 29 08:01:06 crc kubenswrapper[4828]: I1129 08:01:06.968135 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-fernet-keys\") pod \"84a419c7-486a-4b21-a023-c74395681e1d\" (UID: \"84a419c7-486a-4b21-a023-c74395681e1d\") " Nov 29 08:01:06 crc kubenswrapper[4828]: I1129 08:01:06.976103 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "84a419c7-486a-4b21-a023-c74395681e1d" (UID: "84a419c7-486a-4b21-a023-c74395681e1d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:01:06 crc kubenswrapper[4828]: I1129 08:01:06.977882 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a419c7-486a-4b21-a023-c74395681e1d-kube-api-access-rzzfl" (OuterVolumeSpecName: "kube-api-access-rzzfl") pod "84a419c7-486a-4b21-a023-c74395681e1d" (UID: "84a419c7-486a-4b21-a023-c74395681e1d"). InnerVolumeSpecName "kube-api-access-rzzfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:01:07 crc kubenswrapper[4828]: I1129 08:01:07.017638 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84a419c7-486a-4b21-a023-c74395681e1d" (UID: "84a419c7-486a-4b21-a023-c74395681e1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:01:07 crc kubenswrapper[4828]: I1129 08:01:07.024534 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-config-data" (OuterVolumeSpecName: "config-data") pod "84a419c7-486a-4b21-a023-c74395681e1d" (UID: "84a419c7-486a-4b21-a023-c74395681e1d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:01:07 crc kubenswrapper[4828]: I1129 08:01:07.071599 4828 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:07 crc kubenswrapper[4828]: I1129 08:01:07.071642 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:07 crc kubenswrapper[4828]: I1129 08:01:07.071658 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzzfl\" (UniqueName: \"kubernetes.io/projected/84a419c7-486a-4b21-a023-c74395681e1d-kube-api-access-rzzfl\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:07 crc kubenswrapper[4828]: I1129 08:01:07.071672 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a419c7-486a-4b21-a023-c74395681e1d-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:07 crc kubenswrapper[4828]: I1129 08:01:07.429717 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-tmr8c" event={"ID":"84a419c7-486a-4b21-a023-c74395681e1d","Type":"ContainerDied","Data":"20bf54b31d38e020854de6bec7217089b306ab1fae66fc3259e1951da59de347"} Nov 29 08:01:07 crc kubenswrapper[4828]: I1129 08:01:07.430053 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20bf54b31d38e020854de6bec7217089b306ab1fae66fc3259e1951da59de347" Nov 29 08:01:07 crc kubenswrapper[4828]: I1129 08:01:07.429796 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29406721-tmr8c" Nov 29 08:01:41 crc kubenswrapper[4828]: I1129 08:01:41.486811 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:01:41 crc kubenswrapper[4828]: I1129 08:01:41.487427 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:02:11 crc kubenswrapper[4828]: I1129 08:02:11.487666 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:02:11 crc kubenswrapper[4828]: I1129 08:02:11.488280 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:02:41 crc kubenswrapper[4828]: I1129 08:02:41.487116 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:02:41 crc kubenswrapper[4828]: I1129 08:02:41.488375 4828 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:02:41 crc kubenswrapper[4828]: I1129 08:02:41.488502 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 08:02:41 crc kubenswrapper[4828]: I1129 08:02:41.489927 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:02:41 crc kubenswrapper[4828]: I1129 08:02:41.490012 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" gracePeriod=600 Nov 29 08:02:41 crc kubenswrapper[4828]: E1129 08:02:41.610144 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:02:42 crc kubenswrapper[4828]: I1129 08:02:42.343165 4828 generic.go:334] "Generic (PLEG): container finished" 
podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" exitCode=0 Nov 29 08:02:42 crc kubenswrapper[4828]: I1129 08:02:42.343424 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969"} Nov 29 08:02:42 crc kubenswrapper[4828]: I1129 08:02:42.343486 4828 scope.go:117] "RemoveContainer" containerID="bca2ef8c8a7cefee98698adc1a998e44a3bf38ad04b26423bdd6d1a827da8d28" Nov 29 08:02:42 crc kubenswrapper[4828]: I1129 08:02:42.344322 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:02:42 crc kubenswrapper[4828]: E1129 08:02:42.344857 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:02:54 crc kubenswrapper[4828]: I1129 08:02:54.412578 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:02:54 crc kubenswrapper[4828]: E1129 08:02:54.413724 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 
08:03:06 crc kubenswrapper[4828]: I1129 08:03:06.413735 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:03:06 crc kubenswrapper[4828]: E1129 08:03:06.414763 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:03:19 crc kubenswrapper[4828]: I1129 08:03:19.412102 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:03:19 crc kubenswrapper[4828]: E1129 08:03:19.413036 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:03:31 crc kubenswrapper[4828]: I1129 08:03:31.427969 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:03:31 crc kubenswrapper[4828]: E1129 08:03:31.429062 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" 
podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:03:43 crc kubenswrapper[4828]: I1129 08:03:43.411678 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:03:43 crc kubenswrapper[4828]: E1129 08:03:43.412602 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:03:56 crc kubenswrapper[4828]: I1129 08:03:56.412107 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:03:56 crc kubenswrapper[4828]: E1129 08:03:56.412843 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:04:09 crc kubenswrapper[4828]: I1129 08:04:09.418915 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:04:09 crc kubenswrapper[4828]: E1129 08:04:09.419909 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:04:22 crc kubenswrapper[4828]: I1129 08:04:22.411893 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:04:22 crc kubenswrapper[4828]: E1129 08:04:22.412732 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:04:37 crc kubenswrapper[4828]: I1129 08:04:37.412141 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:04:37 crc kubenswrapper[4828]: E1129 08:04:37.412972 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:04:48 crc kubenswrapper[4828]: I1129 08:04:48.412635 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:04:48 crc kubenswrapper[4828]: E1129 08:04:48.413716 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:04:59 crc kubenswrapper[4828]: I1129 08:04:59.412529 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:04:59 crc kubenswrapper[4828]: E1129 08:04:59.413441 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:05:12 crc kubenswrapper[4828]: I1129 08:05:12.412221 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:05:12 crc kubenswrapper[4828]: E1129 08:05:12.413127 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:05:23 crc kubenswrapper[4828]: I1129 08:05:23.412139 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:05:23 crc kubenswrapper[4828]: E1129 08:05:23.412856 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:05:38 crc kubenswrapper[4828]: I1129 08:05:38.411908 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:05:38 crc kubenswrapper[4828]: E1129 08:05:38.412732 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:05:49 crc kubenswrapper[4828]: I1129 08:05:49.412498 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:05:49 crc kubenswrapper[4828]: E1129 08:05:49.413242 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:06:00 crc kubenswrapper[4828]: I1129 08:06:00.436073 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:06:00 crc kubenswrapper[4828]: E1129 08:06:00.436884 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:06:13 crc kubenswrapper[4828]: I1129 08:06:13.413289 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:06:13 crc kubenswrapper[4828]: E1129 08:06:13.414165 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:06:26 crc kubenswrapper[4828]: I1129 08:06:26.412451 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:06:26 crc kubenswrapper[4828]: E1129 08:06:26.413236 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:06:37 crc kubenswrapper[4828]: I1129 08:06:37.412404 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:06:37 crc kubenswrapper[4828]: E1129 08:06:37.413003 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:06:50 crc kubenswrapper[4828]: I1129 08:06:50.412255 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:06:50 crc kubenswrapper[4828]: E1129 08:06:50.413234 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:07:03 crc kubenswrapper[4828]: I1129 08:07:03.412441 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:07:03 crc kubenswrapper[4828]: E1129 08:07:03.413196 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:07:16 crc kubenswrapper[4828]: I1129 08:07:16.412452 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:07:16 crc kubenswrapper[4828]: E1129 08:07:16.413293 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:07:27 crc kubenswrapper[4828]: I1129 08:07:27.413162 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:07:27 crc kubenswrapper[4828]: E1129 08:07:27.413994 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:07:40 crc kubenswrapper[4828]: I1129 08:07:40.412082 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:07:40 crc kubenswrapper[4828]: E1129 08:07:40.412965 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:07:52 crc kubenswrapper[4828]: I1129 08:07:52.413348 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:07:53 crc kubenswrapper[4828]: I1129 08:07:53.437116 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"2854f093e8173a31c342fd9d9b1c784552e1e835ffc9707a0d4b30a8926c5a1d"} Nov 29 08:08:12 crc kubenswrapper[4828]: I1129 08:08:12.603672 4828 generic.go:334] "Generic (PLEG): container finished" podID="fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" containerID="cce125a4c8e28fcfcf32672d4bb6eeb76c07918f56ef2960f9931bee22a717dd" exitCode=0 Nov 29 08:08:12 crc kubenswrapper[4828]: I1129 08:08:12.603760 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da","Type":"ContainerDied","Data":"cce125a4c8e28fcfcf32672d4bb6eeb76c07918f56ef2960f9931bee22a717dd"} Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.301391 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.449808 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.449910 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-openstack-config-secret\") pod \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.449947 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-openstack-config\") pod \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\" (UID: 
\"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.449966 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-542l9\" (UniqueName: \"kubernetes.io/projected/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-kube-api-access-542l9\") pod \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.450080 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-test-operator-ephemeral-temporary\") pod \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.450165 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-test-operator-ephemeral-workdir\") pod \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.450248 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-ca-certs\") pod \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.450293 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-config-data\") pod \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.450343 4828 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-ssh-key\") pod \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\" (UID: \"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da\") " Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.450837 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" (UID: "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.451197 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-config-data" (OuterVolumeSpecName: "config-data") pod "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" (UID: "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.452116 4828 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.452171 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.456647 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-kube-api-access-542l9" (OuterVolumeSpecName: "kube-api-access-542l9") pod "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" (UID: "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da"). InnerVolumeSpecName "kube-api-access-542l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.458723 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" (UID: "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.461400 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" (UID: "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.482338 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" (UID: "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.484551 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" (UID: "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.491564 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" (UID: "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.503331 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" (UID: "fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.553732 4828 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.553767 4828 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.553775 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-542l9\" (UniqueName: \"kubernetes.io/projected/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-kube-api-access-542l9\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.553786 4828 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.553796 4828 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.553806 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.553825 4828 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 
08:08:14.574577 4828 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.625978 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da","Type":"ContainerDied","Data":"3045907357aabad916740a40f8c8e09d1a0d3f185d4ac3f42d73b0d75a7620ae"} Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.626022 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3045907357aabad916740a40f8c8e09d1a0d3f185d4ac3f42d73b0d75a7620ae" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.626093 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 08:08:14 crc kubenswrapper[4828]: I1129 08:08:14.655293 4828 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:18 crc kubenswrapper[4828]: I1129 08:08:18.689008 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 08:08:18 crc kubenswrapper[4828]: E1129 08:08:18.689957 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" containerName="tempest-tests-tempest-tests-runner" Nov 29 08:08:18 crc kubenswrapper[4828]: I1129 08:08:18.689971 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" containerName="tempest-tests-tempest-tests-runner" Nov 29 08:08:18 crc kubenswrapper[4828]: E1129 08:08:18.689987 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a419c7-486a-4b21-a023-c74395681e1d" containerName="keystone-cron" Nov 29 08:08:18 crc kubenswrapper[4828]: 
I1129 08:08:18.689992 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a419c7-486a-4b21-a023-c74395681e1d" containerName="keystone-cron" Nov 29 08:08:18 crc kubenswrapper[4828]: I1129 08:08:18.690156 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da" containerName="tempest-tests-tempest-tests-runner" Nov 29 08:08:18 crc kubenswrapper[4828]: I1129 08:08:18.690179 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a419c7-486a-4b21-a023-c74395681e1d" containerName="keystone-cron" Nov 29 08:08:18 crc kubenswrapper[4828]: I1129 08:08:18.690866 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:18 crc kubenswrapper[4828]: I1129 08:08:18.692817 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-7x6kc" Nov 29 08:08:18 crc kubenswrapper[4828]: I1129 08:08:18.698484 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 08:08:18 crc kubenswrapper[4828]: I1129 08:08:18.831888 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9f67c663-1885-40e8-94c2-f35ac8e7a0f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:18 crc kubenswrapper[4828]: I1129 08:08:18.831979 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q54wv\" (UniqueName: \"kubernetes.io/projected/9f67c663-1885-40e8-94c2-f35ac8e7a0f1-kube-api-access-q54wv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9f67c663-1885-40e8-94c2-f35ac8e7a0f1\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:18 crc kubenswrapper[4828]: I1129 08:08:18.933987 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9f67c663-1885-40e8-94c2-f35ac8e7a0f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:18 crc kubenswrapper[4828]: I1129 08:08:18.934134 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q54wv\" (UniqueName: \"kubernetes.io/projected/9f67c663-1885-40e8-94c2-f35ac8e7a0f1-kube-api-access-q54wv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9f67c663-1885-40e8-94c2-f35ac8e7a0f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:18 crc kubenswrapper[4828]: I1129 08:08:18.934972 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9f67c663-1885-40e8-94c2-f35ac8e7a0f1\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:18 crc kubenswrapper[4828]: I1129 08:08:18.959973 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q54wv\" (UniqueName: \"kubernetes.io/projected/9f67c663-1885-40e8-94c2-f35ac8e7a0f1-kube-api-access-q54wv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9f67c663-1885-40e8-94c2-f35ac8e7a0f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:18 crc kubenswrapper[4828]: I1129 08:08:18.963810 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9f67c663-1885-40e8-94c2-f35ac8e7a0f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:19 crc kubenswrapper[4828]: I1129 08:08:19.017452 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:19 crc kubenswrapper[4828]: I1129 08:08:19.466966 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:08:19 crc kubenswrapper[4828]: I1129 08:08:19.472535 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 08:08:19 crc kubenswrapper[4828]: I1129 08:08:19.672639 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9f67c663-1885-40e8-94c2-f35ac8e7a0f1","Type":"ContainerStarted","Data":"c54f7fc9a79b7d9b5f7aba559100ad4a5b55363b4e4700c0190a81feeaa05464"} Nov 29 08:08:41 crc kubenswrapper[4828]: I1129 08:08:41.672884 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-c8bd5b56c-6wm6v" podUID="ffaa931d-e049-475f-8a3a-95cdf41bf40f" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 29 08:08:41 crc kubenswrapper[4828]: I1129 08:08:41.672914 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-c8bd5b56c-6wm6v" podUID="ffaa931d-e049-475f-8a3a-95cdf41bf40f" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 29 08:09:39 crc kubenswrapper[4828]: I1129 08:09:39.422657 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
event={"ID":"9f67c663-1885-40e8-94c2-f35ac8e7a0f1","Type":"ContainerStarted","Data":"e1c0d3df806b7e51343215ea7ea80a6a4a5d603bddb171c20f37a4217ed7886d"} Nov 29 08:09:39 crc kubenswrapper[4828]: I1129 08:09:39.442843 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.239437643 podStartE2EDuration="1m21.442815854s" podCreationTimestamp="2025-11-29 08:08:18 +0000 UTC" firstStartedPulling="2025-11-29 08:08:19.46669396 +0000 UTC m=+4039.088770018" lastFinishedPulling="2025-11-29 08:09:38.670072181 +0000 UTC m=+4118.292148229" observedRunningTime="2025-11-29 08:09:39.439589322 +0000 UTC m=+4119.061665390" watchObservedRunningTime="2025-11-29 08:09:39.442815854 +0000 UTC m=+4119.064891912" Nov 29 08:10:04 crc kubenswrapper[4828]: I1129 08:10:04.338805 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bn45t/must-gather-dpq24"] Nov 29 08:10:04 crc kubenswrapper[4828]: I1129 08:10:04.342952 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bn45t/must-gather-dpq24" Nov 29 08:10:04 crc kubenswrapper[4828]: I1129 08:10:04.348708 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bn45t"/"openshift-service-ca.crt" Nov 29 08:10:04 crc kubenswrapper[4828]: I1129 08:10:04.348846 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bn45t"/"default-dockercfg-fsjt8" Nov 29 08:10:04 crc kubenswrapper[4828]: I1129 08:10:04.349302 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bn45t"/"kube-root-ca.crt" Nov 29 08:10:04 crc kubenswrapper[4828]: I1129 08:10:04.352638 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bn45t/must-gather-dpq24"] Nov 29 08:10:04 crc kubenswrapper[4828]: I1129 08:10:04.413119 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtrlr\" (UniqueName: \"kubernetes.io/projected/4de50c1d-e1a7-4e6a-b278-820ba842ca11-kube-api-access-wtrlr\") pod \"must-gather-dpq24\" (UID: \"4de50c1d-e1a7-4e6a-b278-820ba842ca11\") " pod="openshift-must-gather-bn45t/must-gather-dpq24" Nov 29 08:10:04 crc kubenswrapper[4828]: I1129 08:10:04.413156 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4de50c1d-e1a7-4e6a-b278-820ba842ca11-must-gather-output\") pod \"must-gather-dpq24\" (UID: \"4de50c1d-e1a7-4e6a-b278-820ba842ca11\") " pod="openshift-must-gather-bn45t/must-gather-dpq24" Nov 29 08:10:04 crc kubenswrapper[4828]: I1129 08:10:04.514031 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtrlr\" (UniqueName: \"kubernetes.io/projected/4de50c1d-e1a7-4e6a-b278-820ba842ca11-kube-api-access-wtrlr\") pod \"must-gather-dpq24\" (UID: \"4de50c1d-e1a7-4e6a-b278-820ba842ca11\") " 
pod="openshift-must-gather-bn45t/must-gather-dpq24" Nov 29 08:10:04 crc kubenswrapper[4828]: I1129 08:10:04.514069 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4de50c1d-e1a7-4e6a-b278-820ba842ca11-must-gather-output\") pod \"must-gather-dpq24\" (UID: \"4de50c1d-e1a7-4e6a-b278-820ba842ca11\") " pod="openshift-must-gather-bn45t/must-gather-dpq24" Nov 29 08:10:04 crc kubenswrapper[4828]: I1129 08:10:04.514526 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4de50c1d-e1a7-4e6a-b278-820ba842ca11-must-gather-output\") pod \"must-gather-dpq24\" (UID: \"4de50c1d-e1a7-4e6a-b278-820ba842ca11\") " pod="openshift-must-gather-bn45t/must-gather-dpq24" Nov 29 08:10:04 crc kubenswrapper[4828]: I1129 08:10:04.533015 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtrlr\" (UniqueName: \"kubernetes.io/projected/4de50c1d-e1a7-4e6a-b278-820ba842ca11-kube-api-access-wtrlr\") pod \"must-gather-dpq24\" (UID: \"4de50c1d-e1a7-4e6a-b278-820ba842ca11\") " pod="openshift-must-gather-bn45t/must-gather-dpq24" Nov 29 08:10:04 crc kubenswrapper[4828]: I1129 08:10:04.668020 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bn45t/must-gather-dpq24" Nov 29 08:10:05 crc kubenswrapper[4828]: I1129 08:10:05.116055 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bn45t/must-gather-dpq24"] Nov 29 08:10:05 crc kubenswrapper[4828]: I1129 08:10:05.644871 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bn45t/must-gather-dpq24" event={"ID":"4de50c1d-e1a7-4e6a-b278-820ba842ca11","Type":"ContainerStarted","Data":"e5576bf4bf9fa296b78a3a0d3c4321b1e75dcd966c899986317be41062fa7b77"} Nov 29 08:10:10 crc kubenswrapper[4828]: I1129 08:10:10.919691 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p7n4h"] Nov 29 08:10:10 crc kubenswrapper[4828]: I1129 08:10:10.922200 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:10:10 crc kubenswrapper[4828]: I1129 08:10:10.954127 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7n4h"] Nov 29 08:10:11 crc kubenswrapper[4828]: I1129 08:10:11.062322 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5986c62-be2f-466b-ae18-eb064d27c27f-utilities\") pod \"redhat-operators-p7n4h\" (UID: \"d5986c62-be2f-466b-ae18-eb064d27c27f\") " pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:10:11 crc kubenswrapper[4828]: I1129 08:10:11.062390 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5986c62-be2f-466b-ae18-eb064d27c27f-catalog-content\") pod \"redhat-operators-p7n4h\" (UID: \"d5986c62-be2f-466b-ae18-eb064d27c27f\") " pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:10:11 crc kubenswrapper[4828]: I1129 08:10:11.062432 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljc2x\" (UniqueName: \"kubernetes.io/projected/d5986c62-be2f-466b-ae18-eb064d27c27f-kube-api-access-ljc2x\") pod \"redhat-operators-p7n4h\" (UID: \"d5986c62-be2f-466b-ae18-eb064d27c27f\") " pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:10:11 crc kubenswrapper[4828]: I1129 08:10:11.164258 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5986c62-be2f-466b-ae18-eb064d27c27f-utilities\") pod \"redhat-operators-p7n4h\" (UID: \"d5986c62-be2f-466b-ae18-eb064d27c27f\") " pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:10:11 crc kubenswrapper[4828]: I1129 08:10:11.164339 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5986c62-be2f-466b-ae18-eb064d27c27f-catalog-content\") pod \"redhat-operators-p7n4h\" (UID: \"d5986c62-be2f-466b-ae18-eb064d27c27f\") " pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:10:11 crc kubenswrapper[4828]: I1129 08:10:11.164373 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljc2x\" (UniqueName: \"kubernetes.io/projected/d5986c62-be2f-466b-ae18-eb064d27c27f-kube-api-access-ljc2x\") pod \"redhat-operators-p7n4h\" (UID: \"d5986c62-be2f-466b-ae18-eb064d27c27f\") " pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:10:11 crc kubenswrapper[4828]: I1129 08:10:11.164931 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5986c62-be2f-466b-ae18-eb064d27c27f-utilities\") pod \"redhat-operators-p7n4h\" (UID: \"d5986c62-be2f-466b-ae18-eb064d27c27f\") " pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:10:11 crc kubenswrapper[4828]: I1129 08:10:11.164999 4828 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5986c62-be2f-466b-ae18-eb064d27c27f-catalog-content\") pod \"redhat-operators-p7n4h\" (UID: \"d5986c62-be2f-466b-ae18-eb064d27c27f\") " pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:10:11 crc kubenswrapper[4828]: I1129 08:10:11.486927 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:10:11 crc kubenswrapper[4828]: I1129 08:10:11.486994 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:10:11 crc kubenswrapper[4828]: I1129 08:10:11.657490 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljc2x\" (UniqueName: \"kubernetes.io/projected/d5986c62-be2f-466b-ae18-eb064d27c27f-kube-api-access-ljc2x\") pod \"redhat-operators-p7n4h\" (UID: \"d5986c62-be2f-466b-ae18-eb064d27c27f\") " pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:10:11 crc kubenswrapper[4828]: I1129 08:10:11.898022 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:10:12 crc kubenswrapper[4828]: I1129 08:10:12.476516 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7n4h"] Nov 29 08:10:12 crc kubenswrapper[4828]: I1129 08:10:12.712881 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7n4h" event={"ID":"d5986c62-be2f-466b-ae18-eb064d27c27f","Type":"ContainerStarted","Data":"8067fb6aaac1bfc7aabfa3cc01021e685ffc71ed467d894c327c2fa7e7b919d6"} Nov 29 08:10:12 crc kubenswrapper[4828]: I1129 08:10:12.713695 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7n4h" event={"ID":"d5986c62-be2f-466b-ae18-eb064d27c27f","Type":"ContainerStarted","Data":"8b97f6ccf6444effe1de314246b6dde0d0f80d6222695351b681d9ba3f880da5"} Nov 29 08:10:13 crc kubenswrapper[4828]: I1129 08:10:13.724669 4828 generic.go:334] "Generic (PLEG): container finished" podID="d5986c62-be2f-466b-ae18-eb064d27c27f" containerID="8067fb6aaac1bfc7aabfa3cc01021e685ffc71ed467d894c327c2fa7e7b919d6" exitCode=0 Nov 29 08:10:13 crc kubenswrapper[4828]: I1129 08:10:13.724781 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7n4h" event={"ID":"d5986c62-be2f-466b-ae18-eb064d27c27f","Type":"ContainerDied","Data":"8067fb6aaac1bfc7aabfa3cc01021e685ffc71ed467d894c327c2fa7e7b919d6"} Nov 29 08:10:14 crc kubenswrapper[4828]: I1129 08:10:14.097475 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h75t5"] Nov 29 08:10:14 crc kubenswrapper[4828]: I1129 08:10:14.108753 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h75t5"] Nov 29 08:10:14 crc kubenswrapper[4828]: I1129 08:10:14.109211 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:10:14 crc kubenswrapper[4828]: I1129 08:10:14.126649 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk68r\" (UniqueName: \"kubernetes.io/projected/07f8361b-bb66-4ea0-a17d-72a68a3defed-kube-api-access-vk68r\") pod \"certified-operators-h75t5\" (UID: \"07f8361b-bb66-4ea0-a17d-72a68a3defed\") " pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:10:14 crc kubenswrapper[4828]: I1129 08:10:14.126926 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f8361b-bb66-4ea0-a17d-72a68a3defed-utilities\") pod \"certified-operators-h75t5\" (UID: \"07f8361b-bb66-4ea0-a17d-72a68a3defed\") " pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:10:14 crc kubenswrapper[4828]: I1129 08:10:14.128129 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f8361b-bb66-4ea0-a17d-72a68a3defed-catalog-content\") pod \"certified-operators-h75t5\" (UID: \"07f8361b-bb66-4ea0-a17d-72a68a3defed\") " pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:10:14 crc kubenswrapper[4828]: I1129 08:10:14.230441 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk68r\" (UniqueName: \"kubernetes.io/projected/07f8361b-bb66-4ea0-a17d-72a68a3defed-kube-api-access-vk68r\") pod \"certified-operators-h75t5\" (UID: \"07f8361b-bb66-4ea0-a17d-72a68a3defed\") " pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:10:14 crc kubenswrapper[4828]: I1129 08:10:14.230580 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f8361b-bb66-4ea0-a17d-72a68a3defed-utilities\") pod 
\"certified-operators-h75t5\" (UID: \"07f8361b-bb66-4ea0-a17d-72a68a3defed\") " pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:10:14 crc kubenswrapper[4828]: I1129 08:10:14.230623 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f8361b-bb66-4ea0-a17d-72a68a3defed-catalog-content\") pod \"certified-operators-h75t5\" (UID: \"07f8361b-bb66-4ea0-a17d-72a68a3defed\") " pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:10:14 crc kubenswrapper[4828]: I1129 08:10:14.231459 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f8361b-bb66-4ea0-a17d-72a68a3defed-catalog-content\") pod \"certified-operators-h75t5\" (UID: \"07f8361b-bb66-4ea0-a17d-72a68a3defed\") " pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:10:14 crc kubenswrapper[4828]: I1129 08:10:14.231551 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f8361b-bb66-4ea0-a17d-72a68a3defed-utilities\") pod \"certified-operators-h75t5\" (UID: \"07f8361b-bb66-4ea0-a17d-72a68a3defed\") " pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:10:14 crc kubenswrapper[4828]: I1129 08:10:14.253229 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk68r\" (UniqueName: \"kubernetes.io/projected/07f8361b-bb66-4ea0-a17d-72a68a3defed-kube-api-access-vk68r\") pod \"certified-operators-h75t5\" (UID: \"07f8361b-bb66-4ea0-a17d-72a68a3defed\") " pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:10:14 crc kubenswrapper[4828]: I1129 08:10:14.434843 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:10:15 crc kubenswrapper[4828]: I1129 08:10:14.964570 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h75t5"] Nov 29 08:10:15 crc kubenswrapper[4828]: I1129 08:10:15.743423 4828 generic.go:334] "Generic (PLEG): container finished" podID="07f8361b-bb66-4ea0-a17d-72a68a3defed" containerID="6020134b9fa22a6c23d6b8afd9bb7f3126feb32f67634f211d14634013489cce" exitCode=0 Nov 29 08:10:15 crc kubenswrapper[4828]: I1129 08:10:15.743649 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h75t5" event={"ID":"07f8361b-bb66-4ea0-a17d-72a68a3defed","Type":"ContainerDied","Data":"6020134b9fa22a6c23d6b8afd9bb7f3126feb32f67634f211d14634013489cce"} Nov 29 08:10:15 crc kubenswrapper[4828]: I1129 08:10:15.743736 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h75t5" event={"ID":"07f8361b-bb66-4ea0-a17d-72a68a3defed","Type":"ContainerStarted","Data":"e81dcafedb7819b1137006956b7272408621c0a334f8c019b28f12c3e400a98c"} Nov 29 08:10:24 crc kubenswrapper[4828]: I1129 08:10:24.500808 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5wksg"] Nov 29 08:10:24 crc kubenswrapper[4828]: I1129 08:10:24.506182 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:24 crc kubenswrapper[4828]: I1129 08:10:24.522974 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wksg"] Nov 29 08:10:24 crc kubenswrapper[4828]: I1129 08:10:24.523648 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp4n4\" (UniqueName: \"kubernetes.io/projected/4d31a504-6051-4680-916c-b7c18d348f5f-kube-api-access-xp4n4\") pod \"community-operators-5wksg\" (UID: \"4d31a504-6051-4680-916c-b7c18d348f5f\") " pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:24 crc kubenswrapper[4828]: I1129 08:10:24.523851 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d31a504-6051-4680-916c-b7c18d348f5f-utilities\") pod \"community-operators-5wksg\" (UID: \"4d31a504-6051-4680-916c-b7c18d348f5f\") " pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:24 crc kubenswrapper[4828]: I1129 08:10:24.523929 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d31a504-6051-4680-916c-b7c18d348f5f-catalog-content\") pod \"community-operators-5wksg\" (UID: \"4d31a504-6051-4680-916c-b7c18d348f5f\") " pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:24 crc kubenswrapper[4828]: I1129 08:10:24.625430 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d31a504-6051-4680-916c-b7c18d348f5f-utilities\") pod \"community-operators-5wksg\" (UID: \"4d31a504-6051-4680-916c-b7c18d348f5f\") " pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:24 crc kubenswrapper[4828]: I1129 08:10:24.625494 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d31a504-6051-4680-916c-b7c18d348f5f-catalog-content\") pod \"community-operators-5wksg\" (UID: \"4d31a504-6051-4680-916c-b7c18d348f5f\") " pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:24 crc kubenswrapper[4828]: I1129 08:10:24.625631 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp4n4\" (UniqueName: \"kubernetes.io/projected/4d31a504-6051-4680-916c-b7c18d348f5f-kube-api-access-xp4n4\") pod \"community-operators-5wksg\" (UID: \"4d31a504-6051-4680-916c-b7c18d348f5f\") " pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:24 crc kubenswrapper[4828]: I1129 08:10:24.626151 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d31a504-6051-4680-916c-b7c18d348f5f-utilities\") pod \"community-operators-5wksg\" (UID: \"4d31a504-6051-4680-916c-b7c18d348f5f\") " pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:24 crc kubenswrapper[4828]: I1129 08:10:24.626196 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d31a504-6051-4680-916c-b7c18d348f5f-catalog-content\") pod \"community-operators-5wksg\" (UID: \"4d31a504-6051-4680-916c-b7c18d348f5f\") " pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:24 crc kubenswrapper[4828]: I1129 08:10:24.661904 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp4n4\" (UniqueName: \"kubernetes.io/projected/4d31a504-6051-4680-916c-b7c18d348f5f-kube-api-access-xp4n4\") pod \"community-operators-5wksg\" (UID: \"4d31a504-6051-4680-916c-b7c18d348f5f\") " pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:24 crc kubenswrapper[4828]: I1129 08:10:24.837394 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:25 crc kubenswrapper[4828]: I1129 08:10:25.191353 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wksg"] Nov 29 08:10:25 crc kubenswrapper[4828]: I1129 08:10:25.829451 4828 generic.go:334] "Generic (PLEG): container finished" podID="4d31a504-6051-4680-916c-b7c18d348f5f" containerID="2d2b7ce87d76909ed83bcc74ec6685b5d95b1a41b1a5961a4edf506da4ec932e" exitCode=0 Nov 29 08:10:25 crc kubenswrapper[4828]: I1129 08:10:25.829499 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wksg" event={"ID":"4d31a504-6051-4680-916c-b7c18d348f5f","Type":"ContainerDied","Data":"2d2b7ce87d76909ed83bcc74ec6685b5d95b1a41b1a5961a4edf506da4ec932e"} Nov 29 08:10:25 crc kubenswrapper[4828]: I1129 08:10:25.829528 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wksg" event={"ID":"4d31a504-6051-4680-916c-b7c18d348f5f","Type":"ContainerStarted","Data":"83f37230da04955d35438eafde6561593e17b1cfe16d1864407acfcb675a088c"} Nov 29 08:10:39 crc kubenswrapper[4828]: E1129 08:10:39.932077 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/6a/6aec6bcdf3538aaa0768039006df1aa5ca70f4788921b2dc9f0647023bd59b56?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20251129%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20251129T081014Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=c7219ef834676f61d2cb6ce27f661514efd862be6dc5a7746ed3c7397ae2a5c2&region=us-east-1&namespace=openstack-k8s-operators&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=openstack-must-gather&akamai_signature=exp=1764404714~hmac=1445c8e0d610fea515dbb1b64e79207440e7beeda696cd69d1a03510691511a8\": net/http: 
TLS handshake timeout" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Nov 29 08:10:39 crc kubenswrapper[4828]: E1129 08:10:39.932699 4828 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 29 08:10:39 crc kubenswrapper[4828]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c Nov 29 08:10:39 crc kubenswrapper[4828]: echo "[disk usage checker] Started" Nov 29 08:10:39 crc kubenswrapper[4828]: target_dir="/must-gather" Nov 29 08:10:39 crc kubenswrapper[4828]: usage_percentage_limit="70" Nov 29 08:10:39 crc kubenswrapper[4828]: while true; do Nov 29 08:10:39 crc kubenswrapper[4828]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//') Nov 29 08:10:39 crc kubenswrapper[4828]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Nov 29 08:10:39 crc kubenswrapper[4828]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Nov 29 08:10:39 crc kubenswrapper[4828]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." 
Nov 29 08:10:39 crc kubenswrapper[4828]: ps -o sess --no-headers | sort -u | while read sid; do Nov 29 08:10:39 crc kubenswrapper[4828]: [[ "$sid" -eq "${$}" ]] && continue Nov 29 08:10:39 crc kubenswrapper[4828]: pkill --signal SIGKILL --session "$sid" Nov 29 08:10:39 crc kubenswrapper[4828]: done Nov 29 08:10:39 crc kubenswrapper[4828]: exit 1 Nov 29 08:10:39 crc kubenswrapper[4828]: fi Nov 29 08:10:39 crc kubenswrapper[4828]: sleep 5 Nov 29 08:10:39 crc kubenswrapper[4828]: done & setsid -w bash <<-MUSTGATHER_EOF Nov 29 08:10:39 crc kubenswrapper[4828]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all SOS_DECOMPRESS=0 gather Nov 29 08:10:39 crc kubenswrapper[4828]: MUSTGATHER_EOF Nov 29 08:10:39 crc kubenswrapper[4828]: sync && echo 'Caches written to disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtrlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnErr
or,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-dpq24_openshift-must-gather-bn45t(4de50c1d-e1a7-4e6a-b278-820ba842ca11): ErrImagePull: parsing image configuration: Get "https://cdn01.quay.io/quayio-production-s3/sha256/6a/6aec6bcdf3538aaa0768039006df1aa5ca70f4788921b2dc9f0647023bd59b56?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20251129%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20251129T081014Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=c7219ef834676f61d2cb6ce27f661514efd862be6dc5a7746ed3c7397ae2a5c2&region=us-east-1&namespace=openstack-k8s-operators&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=openstack-must-gather&akamai_signature=exp=1764404714~hmac=1445c8e0d610fea515dbb1b64e79207440e7beeda696cd69d1a03510691511a8": net/http: TLS handshake timeout Nov 29 08:10:39 crc kubenswrapper[4828]: > logger="UnhandledError" Nov 29 08:10:39 crc kubenswrapper[4828]: E1129 08:10:39.935402 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"parsing image configuration: Get \\\"https://cdn01.quay.io/quayio-production-s3/sha256/6a/6aec6bcdf3538aaa0768039006df1aa5ca70f4788921b2dc9f0647023bd59b56?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20251129%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20251129T081014Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=c7219ef834676f61d2cb6ce27f661514efd862be6dc5a7746ed3c7397ae2a5c2&region=us-east-1&namespace=openstack-k8s-operators&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=openstack-must-gather&akamai_signature=exp=1764404714~hmac=1445c8e0d610fea515dbb1b64e79207440e7beeda696cd69d1a03510691511a8\\\": net/http: TLS handshake timeout\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-bn45t/must-gather-dpq24" podUID="4de50c1d-e1a7-4e6a-b278-820ba842ca11" Nov 29 08:10:39 crc kubenswrapper[4828]: E1129 08:10:39.986214 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-bn45t/must-gather-dpq24" podUID="4de50c1d-e1a7-4e6a-b278-820ba842ca11" Nov 29 08:10:41 crc kubenswrapper[4828]: I1129 08:10:41.487187 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:10:41 crc kubenswrapper[4828]: I1129 08:10:41.488398 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:10:49 crc kubenswrapper[4828]: I1129 08:10:49.072578 4828 generic.go:334] "Generic (PLEG): container finished" podID="4d31a504-6051-4680-916c-b7c18d348f5f" containerID="5e06a41d533eb578e32f98d93dd546caf1f67abc2e579350e272e24e7509492f" exitCode=0 Nov 29 08:10:49 crc kubenswrapper[4828]: I1129 08:10:49.072660 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wksg" 
event={"ID":"4d31a504-6051-4680-916c-b7c18d348f5f","Type":"ContainerDied","Data":"5e06a41d533eb578e32f98d93dd546caf1f67abc2e579350e272e24e7509492f"} Nov 29 08:10:49 crc kubenswrapper[4828]: I1129 08:10:49.397236 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bn45t/must-gather-dpq24"] Nov 29 08:10:49 crc kubenswrapper[4828]: I1129 08:10:49.406510 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bn45t/must-gather-dpq24"] Nov 29 08:10:49 crc kubenswrapper[4828]: I1129 08:10:49.733978 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bn45t/must-gather-dpq24" Nov 29 08:10:49 crc kubenswrapper[4828]: I1129 08:10:49.807801 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtrlr\" (UniqueName: \"kubernetes.io/projected/4de50c1d-e1a7-4e6a-b278-820ba842ca11-kube-api-access-wtrlr\") pod \"4de50c1d-e1a7-4e6a-b278-820ba842ca11\" (UID: \"4de50c1d-e1a7-4e6a-b278-820ba842ca11\") " Nov 29 08:10:49 crc kubenswrapper[4828]: I1129 08:10:49.807918 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4de50c1d-e1a7-4e6a-b278-820ba842ca11-must-gather-output\") pod \"4de50c1d-e1a7-4e6a-b278-820ba842ca11\" (UID: \"4de50c1d-e1a7-4e6a-b278-820ba842ca11\") " Nov 29 08:10:49 crc kubenswrapper[4828]: I1129 08:10:49.808407 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4de50c1d-e1a7-4e6a-b278-820ba842ca11-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4de50c1d-e1a7-4e6a-b278-820ba842ca11" (UID: "4de50c1d-e1a7-4e6a-b278-820ba842ca11"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:10:49 crc kubenswrapper[4828]: I1129 08:10:49.815490 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de50c1d-e1a7-4e6a-b278-820ba842ca11-kube-api-access-wtrlr" (OuterVolumeSpecName: "kube-api-access-wtrlr") pod "4de50c1d-e1a7-4e6a-b278-820ba842ca11" (UID: "4de50c1d-e1a7-4e6a-b278-820ba842ca11"). InnerVolumeSpecName "kube-api-access-wtrlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:10:49 crc kubenswrapper[4828]: I1129 08:10:49.910153 4828 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4de50c1d-e1a7-4e6a-b278-820ba842ca11-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 29 08:10:49 crc kubenswrapper[4828]: I1129 08:10:49.910199 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtrlr\" (UniqueName: \"kubernetes.io/projected/4de50c1d-e1a7-4e6a-b278-820ba842ca11-kube-api-access-wtrlr\") on node \"crc\" DevicePath \"\"" Nov 29 08:10:50 crc kubenswrapper[4828]: I1129 08:10:50.084616 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bn45t/must-gather-dpq24" Nov 29 08:10:51 crc kubenswrapper[4828]: I1129 08:10:51.097927 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wksg" event={"ID":"4d31a504-6051-4680-916c-b7c18d348f5f","Type":"ContainerStarted","Data":"f22f535cce2d951b14ea69836bc2b8c5b27e19029e59a685533b2421178c061e"} Nov 29 08:10:51 crc kubenswrapper[4828]: I1129 08:10:51.122673 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5wksg" podStartSLOduration=2.471308143 podStartE2EDuration="27.122656643s" podCreationTimestamp="2025-11-29 08:10:24 +0000 UTC" firstStartedPulling="2025-11-29 08:10:25.831829942 +0000 UTC m=+4165.453906000" lastFinishedPulling="2025-11-29 08:10:50.483178442 +0000 UTC m=+4190.105254500" observedRunningTime="2025-11-29 08:10:51.117129743 +0000 UTC m=+4190.739205801" watchObservedRunningTime="2025-11-29 08:10:51.122656643 +0000 UTC m=+4190.744732701" Nov 29 08:10:51 crc kubenswrapper[4828]: I1129 08:10:51.433485 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de50c1d-e1a7-4e6a-b278-820ba842ca11" path="/var/lib/kubelet/pods/4de50c1d-e1a7-4e6a-b278-820ba842ca11/volumes" Nov 29 08:10:54 crc kubenswrapper[4828]: I1129 08:10:54.838596 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:54 crc kubenswrapper[4828]: I1129 08:10:54.839154 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:54 crc kubenswrapper[4828]: I1129 08:10:54.894319 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:55 crc kubenswrapper[4828]: I1129 08:10:55.186848 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:56 crc kubenswrapper[4828]: I1129 08:10:56.400463 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wksg"] Nov 29 08:10:57 crc kubenswrapper[4828]: I1129 08:10:57.164226 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5wksg" podUID="4d31a504-6051-4680-916c-b7c18d348f5f" containerName="registry-server" containerID="cri-o://f22f535cce2d951b14ea69836bc2b8c5b27e19029e59a685533b2421178c061e" gracePeriod=2 Nov 29 08:10:57 crc kubenswrapper[4828]: I1129 08:10:57.720402 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:57 crc kubenswrapper[4828]: I1129 08:10:57.867125 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d31a504-6051-4680-916c-b7c18d348f5f-catalog-content\") pod \"4d31a504-6051-4680-916c-b7c18d348f5f\" (UID: \"4d31a504-6051-4680-916c-b7c18d348f5f\") " Nov 29 08:10:57 crc kubenswrapper[4828]: I1129 08:10:57.867757 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d31a504-6051-4680-916c-b7c18d348f5f-utilities\") pod \"4d31a504-6051-4680-916c-b7c18d348f5f\" (UID: \"4d31a504-6051-4680-916c-b7c18d348f5f\") " Nov 29 08:10:57 crc kubenswrapper[4828]: I1129 08:10:57.867826 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp4n4\" (UniqueName: \"kubernetes.io/projected/4d31a504-6051-4680-916c-b7c18d348f5f-kube-api-access-xp4n4\") pod \"4d31a504-6051-4680-916c-b7c18d348f5f\" (UID: \"4d31a504-6051-4680-916c-b7c18d348f5f\") " Nov 29 08:10:57 crc kubenswrapper[4828]: I1129 08:10:57.868574 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/4d31a504-6051-4680-916c-b7c18d348f5f-utilities" (OuterVolumeSpecName: "utilities") pod "4d31a504-6051-4680-916c-b7c18d348f5f" (UID: "4d31a504-6051-4680-916c-b7c18d348f5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:10:57 crc kubenswrapper[4828]: I1129 08:10:57.873361 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d31a504-6051-4680-916c-b7c18d348f5f-kube-api-access-xp4n4" (OuterVolumeSpecName: "kube-api-access-xp4n4") pod "4d31a504-6051-4680-916c-b7c18d348f5f" (UID: "4d31a504-6051-4680-916c-b7c18d348f5f"). InnerVolumeSpecName "kube-api-access-xp4n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:10:57 crc kubenswrapper[4828]: I1129 08:10:57.922451 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d31a504-6051-4680-916c-b7c18d348f5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d31a504-6051-4680-916c-b7c18d348f5f" (UID: "4d31a504-6051-4680-916c-b7c18d348f5f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:10:57 crc kubenswrapper[4828]: I1129 08:10:57.970500 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d31a504-6051-4680-916c-b7c18d348f5f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:10:57 crc kubenswrapper[4828]: I1129 08:10:57.970533 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d31a504-6051-4680-916c-b7c18d348f5f-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:10:57 crc kubenswrapper[4828]: I1129 08:10:57.970543 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp4n4\" (UniqueName: \"kubernetes.io/projected/4d31a504-6051-4680-916c-b7c18d348f5f-kube-api-access-xp4n4\") on node \"crc\" DevicePath \"\"" Nov 29 08:10:58 crc kubenswrapper[4828]: I1129 08:10:58.197915 4828 generic.go:334] "Generic (PLEG): container finished" podID="4d31a504-6051-4680-916c-b7c18d348f5f" containerID="f22f535cce2d951b14ea69836bc2b8c5b27e19029e59a685533b2421178c061e" exitCode=0 Nov 29 08:10:58 crc kubenswrapper[4828]: I1129 08:10:58.198003 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wksg" event={"ID":"4d31a504-6051-4680-916c-b7c18d348f5f","Type":"ContainerDied","Data":"f22f535cce2d951b14ea69836bc2b8c5b27e19029e59a685533b2421178c061e"} Nov 29 08:10:58 crc kubenswrapper[4828]: I1129 08:10:58.198056 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wksg" event={"ID":"4d31a504-6051-4680-916c-b7c18d348f5f","Type":"ContainerDied","Data":"83f37230da04955d35438eafde6561593e17b1cfe16d1864407acfcb675a088c"} Nov 29 08:10:58 crc kubenswrapper[4828]: I1129 08:10:58.198083 4828 scope.go:117] "RemoveContainer" containerID="f22f535cce2d951b14ea69836bc2b8c5b27e19029e59a685533b2421178c061e" Nov 29 08:10:58 crc kubenswrapper[4828]: I1129 
08:10:58.198132 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wksg" Nov 29 08:10:58 crc kubenswrapper[4828]: I1129 08:10:58.221505 4828 scope.go:117] "RemoveContainer" containerID="5e06a41d533eb578e32f98d93dd546caf1f67abc2e579350e272e24e7509492f" Nov 29 08:10:58 crc kubenswrapper[4828]: I1129 08:10:58.240763 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wksg"] Nov 29 08:10:58 crc kubenswrapper[4828]: I1129 08:10:58.246511 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5wksg"] Nov 29 08:10:58 crc kubenswrapper[4828]: I1129 08:10:58.247502 4828 scope.go:117] "RemoveContainer" containerID="2d2b7ce87d76909ed83bcc74ec6685b5d95b1a41b1a5961a4edf506da4ec932e" Nov 29 08:10:58 crc kubenswrapper[4828]: I1129 08:10:58.291260 4828 scope.go:117] "RemoveContainer" containerID="f22f535cce2d951b14ea69836bc2b8c5b27e19029e59a685533b2421178c061e" Nov 29 08:10:58 crc kubenswrapper[4828]: E1129 08:10:58.291696 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f22f535cce2d951b14ea69836bc2b8c5b27e19029e59a685533b2421178c061e\": container with ID starting with f22f535cce2d951b14ea69836bc2b8c5b27e19029e59a685533b2421178c061e not found: ID does not exist" containerID="f22f535cce2d951b14ea69836bc2b8c5b27e19029e59a685533b2421178c061e" Nov 29 08:10:58 crc kubenswrapper[4828]: I1129 08:10:58.291732 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22f535cce2d951b14ea69836bc2b8c5b27e19029e59a685533b2421178c061e"} err="failed to get container status \"f22f535cce2d951b14ea69836bc2b8c5b27e19029e59a685533b2421178c061e\": rpc error: code = NotFound desc = could not find container \"f22f535cce2d951b14ea69836bc2b8c5b27e19029e59a685533b2421178c061e\": container with ID starting with 
f22f535cce2d951b14ea69836bc2b8c5b27e19029e59a685533b2421178c061e not found: ID does not exist" Nov 29 08:10:58 crc kubenswrapper[4828]: I1129 08:10:58.291758 4828 scope.go:117] "RemoveContainer" containerID="5e06a41d533eb578e32f98d93dd546caf1f67abc2e579350e272e24e7509492f" Nov 29 08:10:58 crc kubenswrapper[4828]: E1129 08:10:58.292094 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e06a41d533eb578e32f98d93dd546caf1f67abc2e579350e272e24e7509492f\": container with ID starting with 5e06a41d533eb578e32f98d93dd546caf1f67abc2e579350e272e24e7509492f not found: ID does not exist" containerID="5e06a41d533eb578e32f98d93dd546caf1f67abc2e579350e272e24e7509492f" Nov 29 08:10:58 crc kubenswrapper[4828]: I1129 08:10:58.292120 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e06a41d533eb578e32f98d93dd546caf1f67abc2e579350e272e24e7509492f"} err="failed to get container status \"5e06a41d533eb578e32f98d93dd546caf1f67abc2e579350e272e24e7509492f\": rpc error: code = NotFound desc = could not find container \"5e06a41d533eb578e32f98d93dd546caf1f67abc2e579350e272e24e7509492f\": container with ID starting with 5e06a41d533eb578e32f98d93dd546caf1f67abc2e579350e272e24e7509492f not found: ID does not exist" Nov 29 08:10:58 crc kubenswrapper[4828]: I1129 08:10:58.292138 4828 scope.go:117] "RemoveContainer" containerID="2d2b7ce87d76909ed83bcc74ec6685b5d95b1a41b1a5961a4edf506da4ec932e" Nov 29 08:10:58 crc kubenswrapper[4828]: E1129 08:10:58.292410 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d2b7ce87d76909ed83bcc74ec6685b5d95b1a41b1a5961a4edf506da4ec932e\": container with ID starting with 2d2b7ce87d76909ed83bcc74ec6685b5d95b1a41b1a5961a4edf506da4ec932e not found: ID does not exist" containerID="2d2b7ce87d76909ed83bcc74ec6685b5d95b1a41b1a5961a4edf506da4ec932e" Nov 29 08:10:58 crc 
kubenswrapper[4828]: I1129 08:10:58.292436 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d2b7ce87d76909ed83bcc74ec6685b5d95b1a41b1a5961a4edf506da4ec932e"} err="failed to get container status \"2d2b7ce87d76909ed83bcc74ec6685b5d95b1a41b1a5961a4edf506da4ec932e\": rpc error: code = NotFound desc = could not find container \"2d2b7ce87d76909ed83bcc74ec6685b5d95b1a41b1a5961a4edf506da4ec932e\": container with ID starting with 2d2b7ce87d76909ed83bcc74ec6685b5d95b1a41b1a5961a4edf506da4ec932e not found: ID does not exist" Nov 29 08:10:59 crc kubenswrapper[4828]: I1129 08:10:59.207477 4828 generic.go:334] "Generic (PLEG): container finished" podID="d5986c62-be2f-466b-ae18-eb064d27c27f" containerID="ab035ac3c1d009234b12dc0b6e76a017a1d61c8e4c7d67cd22f1e5707e9a0d8c" exitCode=0 Nov 29 08:10:59 crc kubenswrapper[4828]: I1129 08:10:59.207551 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7n4h" event={"ID":"d5986c62-be2f-466b-ae18-eb064d27c27f","Type":"ContainerDied","Data":"ab035ac3c1d009234b12dc0b6e76a017a1d61c8e4c7d67cd22f1e5707e9a0d8c"} Nov 29 08:10:59 crc kubenswrapper[4828]: I1129 08:10:59.422055 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d31a504-6051-4680-916c-b7c18d348f5f" path="/var/lib/kubelet/pods/4d31a504-6051-4680-916c-b7c18d348f5f/volumes" Nov 29 08:11:01 crc kubenswrapper[4828]: I1129 08:11:01.238712 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7n4h" event={"ID":"d5986c62-be2f-466b-ae18-eb064d27c27f","Type":"ContainerStarted","Data":"2545a153d32aa82429c0fd0b6d2a36909790b71555c92d04871ff24a48c433f4"} Nov 29 08:11:01 crc kubenswrapper[4828]: I1129 08:11:01.268402 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p7n4h" podStartSLOduration=3.831588735 podStartE2EDuration="51.268379206s" 
podCreationTimestamp="2025-11-29 08:10:10 +0000 UTC" firstStartedPulling="2025-11-29 08:10:12.716357866 +0000 UTC m=+4152.338433924" lastFinishedPulling="2025-11-29 08:11:00.153148307 +0000 UTC m=+4199.775224395" observedRunningTime="2025-11-29 08:11:01.263305427 +0000 UTC m=+4200.885381475" watchObservedRunningTime="2025-11-29 08:11:01.268379206 +0000 UTC m=+4200.890455264" Nov 29 08:11:01 crc kubenswrapper[4828]: I1129 08:11:01.898542 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:11:01 crc kubenswrapper[4828]: I1129 08:11:01.899082 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:11:02 crc kubenswrapper[4828]: I1129 08:11:02.944838 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p7n4h" podUID="d5986c62-be2f-466b-ae18-eb064d27c27f" containerName="registry-server" probeResult="failure" output=< Nov 29 08:11:02 crc kubenswrapper[4828]: timeout: failed to connect service ":50051" within 1s Nov 29 08:11:02 crc kubenswrapper[4828]: > Nov 29 08:11:05 crc kubenswrapper[4828]: I1129 08:11:05.291405 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h75t5" event={"ID":"07f8361b-bb66-4ea0-a17d-72a68a3defed","Type":"ContainerStarted","Data":"ced424a225ca3ffcfadb918953cbd312e9fcdb8d1f6d6c0164a2df7fdffd8d57"} Nov 29 08:11:06 crc kubenswrapper[4828]: I1129 08:11:06.302960 4828 generic.go:334] "Generic (PLEG): container finished" podID="07f8361b-bb66-4ea0-a17d-72a68a3defed" containerID="ced424a225ca3ffcfadb918953cbd312e9fcdb8d1f6d6c0164a2df7fdffd8d57" exitCode=0 Nov 29 08:11:06 crc kubenswrapper[4828]: I1129 08:11:06.302998 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h75t5" 
event={"ID":"07f8361b-bb66-4ea0-a17d-72a68a3defed","Type":"ContainerDied","Data":"ced424a225ca3ffcfadb918953cbd312e9fcdb8d1f6d6c0164a2df7fdffd8d57"} Nov 29 08:11:07 crc kubenswrapper[4828]: I1129 08:11:07.314807 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h75t5" event={"ID":"07f8361b-bb66-4ea0-a17d-72a68a3defed","Type":"ContainerStarted","Data":"bae681aaddc1b4097903c9b73e75484a928c74792318c1ca26c61fd6da873187"} Nov 29 08:11:07 crc kubenswrapper[4828]: I1129 08:11:07.335245 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h75t5" podStartSLOduration=2.276304822 podStartE2EDuration="53.335222002s" podCreationTimestamp="2025-11-29 08:10:14 +0000 UTC" firstStartedPulling="2025-11-29 08:10:15.745663305 +0000 UTC m=+4155.367739363" lastFinishedPulling="2025-11-29 08:11:06.804580485 +0000 UTC m=+4206.426656543" observedRunningTime="2025-11-29 08:11:07.330195685 +0000 UTC m=+4206.952271743" watchObservedRunningTime="2025-11-29 08:11:07.335222002 +0000 UTC m=+4206.957298060" Nov 29 08:11:11 crc kubenswrapper[4828]: I1129 08:11:11.486954 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:11:11 crc kubenswrapper[4828]: I1129 08:11:11.488679 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:11:11 crc kubenswrapper[4828]: I1129 08:11:11.488795 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 08:11:11 crc kubenswrapper[4828]: I1129 08:11:11.489630 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2854f093e8173a31c342fd9d9b1c784552e1e835ffc9707a0d4b30a8926c5a1d"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:11:11 crc kubenswrapper[4828]: I1129 08:11:11.489758 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://2854f093e8173a31c342fd9d9b1c784552e1e835ffc9707a0d4b30a8926c5a1d" gracePeriod=600 Nov 29 08:11:11 crc kubenswrapper[4828]: I1129 08:11:11.946228 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:11:11 crc kubenswrapper[4828]: I1129 08:11:11.997582 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:11:12 crc kubenswrapper[4828]: I1129 08:11:12.379910 4828 generic.go:334] "Generic (PLEG): container finished" podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="2854f093e8173a31c342fd9d9b1c784552e1e835ffc9707a0d4b30a8926c5a1d" exitCode=0 Nov 29 08:11:12 crc kubenswrapper[4828]: I1129 08:11:12.379942 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"2854f093e8173a31c342fd9d9b1c784552e1e835ffc9707a0d4b30a8926c5a1d"} Nov 29 08:11:12 crc kubenswrapper[4828]: I1129 08:11:12.379977 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"} Nov 29 08:11:12 crc kubenswrapper[4828]: I1129 08:11:12.379994 4828 scope.go:117] "RemoveContainer" containerID="f84df01a721adf76678265a84d3c94b2581174ed9dabed19ab4d05592f59e969" Nov 29 08:11:12 crc kubenswrapper[4828]: I1129 08:11:12.773571 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7n4h"] Nov 29 08:11:13 crc kubenswrapper[4828]: I1129 08:11:13.391886 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p7n4h" podUID="d5986c62-be2f-466b-ae18-eb064d27c27f" containerName="registry-server" containerID="cri-o://2545a153d32aa82429c0fd0b6d2a36909790b71555c92d04871ff24a48c433f4" gracePeriod=2 Nov 29 08:11:13 crc kubenswrapper[4828]: I1129 08:11:13.952074 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.097829 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljc2x\" (UniqueName: \"kubernetes.io/projected/d5986c62-be2f-466b-ae18-eb064d27c27f-kube-api-access-ljc2x\") pod \"d5986c62-be2f-466b-ae18-eb064d27c27f\" (UID: \"d5986c62-be2f-466b-ae18-eb064d27c27f\") " Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.098170 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5986c62-be2f-466b-ae18-eb064d27c27f-utilities\") pod \"d5986c62-be2f-466b-ae18-eb064d27c27f\" (UID: \"d5986c62-be2f-466b-ae18-eb064d27c27f\") " Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.098214 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5986c62-be2f-466b-ae18-eb064d27c27f-catalog-content\") pod \"d5986c62-be2f-466b-ae18-eb064d27c27f\" (UID: \"d5986c62-be2f-466b-ae18-eb064d27c27f\") " Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.099154 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5986c62-be2f-466b-ae18-eb064d27c27f-utilities" (OuterVolumeSpecName: "utilities") pod "d5986c62-be2f-466b-ae18-eb064d27c27f" (UID: "d5986c62-be2f-466b-ae18-eb064d27c27f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.103518 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5986c62-be2f-466b-ae18-eb064d27c27f-kube-api-access-ljc2x" (OuterVolumeSpecName: "kube-api-access-ljc2x") pod "d5986c62-be2f-466b-ae18-eb064d27c27f" (UID: "d5986c62-be2f-466b-ae18-eb064d27c27f"). InnerVolumeSpecName "kube-api-access-ljc2x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.200877 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljc2x\" (UniqueName: \"kubernetes.io/projected/d5986c62-be2f-466b-ae18-eb064d27c27f-kube-api-access-ljc2x\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.200918 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5986c62-be2f-466b-ae18-eb064d27c27f-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.204165 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5986c62-be2f-466b-ae18-eb064d27c27f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5986c62-be2f-466b-ae18-eb064d27c27f" (UID: "d5986c62-be2f-466b-ae18-eb064d27c27f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.302252 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5986c62-be2f-466b-ae18-eb064d27c27f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.402484 4828 generic.go:334] "Generic (PLEG): container finished" podID="d5986c62-be2f-466b-ae18-eb064d27c27f" containerID="2545a153d32aa82429c0fd0b6d2a36909790b71555c92d04871ff24a48c433f4" exitCode=0 Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.402531 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7n4h" event={"ID":"d5986c62-be2f-466b-ae18-eb064d27c27f","Type":"ContainerDied","Data":"2545a153d32aa82429c0fd0b6d2a36909790b71555c92d04871ff24a48c433f4"} Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.402565 4828 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-p7n4h" event={"ID":"d5986c62-be2f-466b-ae18-eb064d27c27f","Type":"ContainerDied","Data":"8b97f6ccf6444effe1de314246b6dde0d0f80d6222695351b681d9ba3f880da5"} Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.402585 4828 scope.go:117] "RemoveContainer" containerID="2545a153d32aa82429c0fd0b6d2a36909790b71555c92d04871ff24a48c433f4" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.402598 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7n4h" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.427370 4828 scope.go:117] "RemoveContainer" containerID="ab035ac3c1d009234b12dc0b6e76a017a1d61c8e4c7d67cd22f1e5707e9a0d8c" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.438558 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.438606 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.451388 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7n4h"] Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.461844 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p7n4h"] Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.462355 4828 scope.go:117] "RemoveContainer" containerID="8067fb6aaac1bfc7aabfa3cc01021e685ffc71ed467d894c327c2fa7e7b919d6" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.494801 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.505581 4828 scope.go:117] "RemoveContainer" 
containerID="2545a153d32aa82429c0fd0b6d2a36909790b71555c92d04871ff24a48c433f4" Nov 29 08:11:14 crc kubenswrapper[4828]: E1129 08:11:14.506166 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2545a153d32aa82429c0fd0b6d2a36909790b71555c92d04871ff24a48c433f4\": container with ID starting with 2545a153d32aa82429c0fd0b6d2a36909790b71555c92d04871ff24a48c433f4 not found: ID does not exist" containerID="2545a153d32aa82429c0fd0b6d2a36909790b71555c92d04871ff24a48c433f4" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.506227 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2545a153d32aa82429c0fd0b6d2a36909790b71555c92d04871ff24a48c433f4"} err="failed to get container status \"2545a153d32aa82429c0fd0b6d2a36909790b71555c92d04871ff24a48c433f4\": rpc error: code = NotFound desc = could not find container \"2545a153d32aa82429c0fd0b6d2a36909790b71555c92d04871ff24a48c433f4\": container with ID starting with 2545a153d32aa82429c0fd0b6d2a36909790b71555c92d04871ff24a48c433f4 not found: ID does not exist" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.506310 4828 scope.go:117] "RemoveContainer" containerID="ab035ac3c1d009234b12dc0b6e76a017a1d61c8e4c7d67cd22f1e5707e9a0d8c" Nov 29 08:11:14 crc kubenswrapper[4828]: E1129 08:11:14.507150 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab035ac3c1d009234b12dc0b6e76a017a1d61c8e4c7d67cd22f1e5707e9a0d8c\": container with ID starting with ab035ac3c1d009234b12dc0b6e76a017a1d61c8e4c7d67cd22f1e5707e9a0d8c not found: ID does not exist" containerID="ab035ac3c1d009234b12dc0b6e76a017a1d61c8e4c7d67cd22f1e5707e9a0d8c" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.507189 4828 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ab035ac3c1d009234b12dc0b6e76a017a1d61c8e4c7d67cd22f1e5707e9a0d8c"} err="failed to get container status \"ab035ac3c1d009234b12dc0b6e76a017a1d61c8e4c7d67cd22f1e5707e9a0d8c\": rpc error: code = NotFound desc = could not find container \"ab035ac3c1d009234b12dc0b6e76a017a1d61c8e4c7d67cd22f1e5707e9a0d8c\": container with ID starting with ab035ac3c1d009234b12dc0b6e76a017a1d61c8e4c7d67cd22f1e5707e9a0d8c not found: ID does not exist" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.507234 4828 scope.go:117] "RemoveContainer" containerID="8067fb6aaac1bfc7aabfa3cc01021e685ffc71ed467d894c327c2fa7e7b919d6" Nov 29 08:11:14 crc kubenswrapper[4828]: E1129 08:11:14.507663 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8067fb6aaac1bfc7aabfa3cc01021e685ffc71ed467d894c327c2fa7e7b919d6\": container with ID starting with 8067fb6aaac1bfc7aabfa3cc01021e685ffc71ed467d894c327c2fa7e7b919d6 not found: ID does not exist" containerID="8067fb6aaac1bfc7aabfa3cc01021e685ffc71ed467d894c327c2fa7e7b919d6" Nov 29 08:11:14 crc kubenswrapper[4828]: I1129 08:11:14.507684 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8067fb6aaac1bfc7aabfa3cc01021e685ffc71ed467d894c327c2fa7e7b919d6"} err="failed to get container status \"8067fb6aaac1bfc7aabfa3cc01021e685ffc71ed467d894c327c2fa7e7b919d6\": rpc error: code = NotFound desc = could not find container \"8067fb6aaac1bfc7aabfa3cc01021e685ffc71ed467d894c327c2fa7e7b919d6\": container with ID starting with 8067fb6aaac1bfc7aabfa3cc01021e685ffc71ed467d894c327c2fa7e7b919d6 not found: ID does not exist" Nov 29 08:11:15 crc kubenswrapper[4828]: I1129 08:11:15.426587 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5986c62-be2f-466b-ae18-eb064d27c27f" path="/var/lib/kubelet/pods/d5986c62-be2f-466b-ae18-eb064d27c27f/volumes" Nov 29 08:11:15 crc kubenswrapper[4828]: I1129 
08:11:15.469993 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:11:16 crc kubenswrapper[4828]: I1129 08:11:16.772948 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h75t5"] Nov 29 08:11:17 crc kubenswrapper[4828]: I1129 08:11:17.434940 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h75t5" podUID="07f8361b-bb66-4ea0-a17d-72a68a3defed" containerName="registry-server" containerID="cri-o://bae681aaddc1b4097903c9b73e75484a928c74792318c1ca26c61fd6da873187" gracePeriod=2 Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.078936 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.131663 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f8361b-bb66-4ea0-a17d-72a68a3defed-catalog-content\") pod \"07f8361b-bb66-4ea0-a17d-72a68a3defed\" (UID: \"07f8361b-bb66-4ea0-a17d-72a68a3defed\") " Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.131824 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f8361b-bb66-4ea0-a17d-72a68a3defed-utilities\") pod \"07f8361b-bb66-4ea0-a17d-72a68a3defed\" (UID: \"07f8361b-bb66-4ea0-a17d-72a68a3defed\") " Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.131885 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk68r\" (UniqueName: \"kubernetes.io/projected/07f8361b-bb66-4ea0-a17d-72a68a3defed-kube-api-access-vk68r\") pod \"07f8361b-bb66-4ea0-a17d-72a68a3defed\" (UID: \"07f8361b-bb66-4ea0-a17d-72a68a3defed\") " Nov 29 08:11:18 crc kubenswrapper[4828]: 
I1129 08:11:18.132877 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f8361b-bb66-4ea0-a17d-72a68a3defed-utilities" (OuterVolumeSpecName: "utilities") pod "07f8361b-bb66-4ea0-a17d-72a68a3defed" (UID: "07f8361b-bb66-4ea0-a17d-72a68a3defed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.142720 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f8361b-bb66-4ea0-a17d-72a68a3defed-kube-api-access-vk68r" (OuterVolumeSpecName: "kube-api-access-vk68r") pod "07f8361b-bb66-4ea0-a17d-72a68a3defed" (UID: "07f8361b-bb66-4ea0-a17d-72a68a3defed"). InnerVolumeSpecName "kube-api-access-vk68r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.192909 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f8361b-bb66-4ea0-a17d-72a68a3defed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07f8361b-bb66-4ea0-a17d-72a68a3defed" (UID: "07f8361b-bb66-4ea0-a17d-72a68a3defed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.233233 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f8361b-bb66-4ea0-a17d-72a68a3defed-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.233285 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f8361b-bb66-4ea0-a17d-72a68a3defed-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.233295 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk68r\" (UniqueName: \"kubernetes.io/projected/07f8361b-bb66-4ea0-a17d-72a68a3defed-kube-api-access-vk68r\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.446758 4828 generic.go:334] "Generic (PLEG): container finished" podID="07f8361b-bb66-4ea0-a17d-72a68a3defed" containerID="bae681aaddc1b4097903c9b73e75484a928c74792318c1ca26c61fd6da873187" exitCode=0 Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.446805 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h75t5" event={"ID":"07f8361b-bb66-4ea0-a17d-72a68a3defed","Type":"ContainerDied","Data":"bae681aaddc1b4097903c9b73e75484a928c74792318c1ca26c61fd6da873187"} Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.446827 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h75t5" Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.446844 4828 scope.go:117] "RemoveContainer" containerID="bae681aaddc1b4097903c9b73e75484a928c74792318c1ca26c61fd6da873187" Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.446832 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h75t5" event={"ID":"07f8361b-bb66-4ea0-a17d-72a68a3defed","Type":"ContainerDied","Data":"e81dcafedb7819b1137006956b7272408621c0a334f8c019b28f12c3e400a98c"} Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.474393 4828 scope.go:117] "RemoveContainer" containerID="ced424a225ca3ffcfadb918953cbd312e9fcdb8d1f6d6c0164a2df7fdffd8d57" Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.485738 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h75t5"] Nov 29 08:11:18 crc kubenswrapper[4828]: I1129 08:11:18.493747 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h75t5"] Nov 29 08:11:19 crc kubenswrapper[4828]: I1129 08:11:19.000532 4828 scope.go:117] "RemoveContainer" containerID="6020134b9fa22a6c23d6b8afd9bb7f3126feb32f67634f211d14634013489cce" Nov 29 08:11:19 crc kubenswrapper[4828]: I1129 08:11:19.040177 4828 scope.go:117] "RemoveContainer" containerID="bae681aaddc1b4097903c9b73e75484a928c74792318c1ca26c61fd6da873187" Nov 29 08:11:19 crc kubenswrapper[4828]: E1129 08:11:19.040906 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bae681aaddc1b4097903c9b73e75484a928c74792318c1ca26c61fd6da873187\": container with ID starting with bae681aaddc1b4097903c9b73e75484a928c74792318c1ca26c61fd6da873187 not found: ID does not exist" containerID="bae681aaddc1b4097903c9b73e75484a928c74792318c1ca26c61fd6da873187" Nov 29 08:11:19 crc kubenswrapper[4828]: I1129 08:11:19.040967 4828 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae681aaddc1b4097903c9b73e75484a928c74792318c1ca26c61fd6da873187"} err="failed to get container status \"bae681aaddc1b4097903c9b73e75484a928c74792318c1ca26c61fd6da873187\": rpc error: code = NotFound desc = could not find container \"bae681aaddc1b4097903c9b73e75484a928c74792318c1ca26c61fd6da873187\": container with ID starting with bae681aaddc1b4097903c9b73e75484a928c74792318c1ca26c61fd6da873187 not found: ID does not exist" Nov 29 08:11:19 crc kubenswrapper[4828]: I1129 08:11:19.041007 4828 scope.go:117] "RemoveContainer" containerID="ced424a225ca3ffcfadb918953cbd312e9fcdb8d1f6d6c0164a2df7fdffd8d57" Nov 29 08:11:19 crc kubenswrapper[4828]: E1129 08:11:19.041341 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced424a225ca3ffcfadb918953cbd312e9fcdb8d1f6d6c0164a2df7fdffd8d57\": container with ID starting with ced424a225ca3ffcfadb918953cbd312e9fcdb8d1f6d6c0164a2df7fdffd8d57 not found: ID does not exist" containerID="ced424a225ca3ffcfadb918953cbd312e9fcdb8d1f6d6c0164a2df7fdffd8d57" Nov 29 08:11:19 crc kubenswrapper[4828]: I1129 08:11:19.041376 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced424a225ca3ffcfadb918953cbd312e9fcdb8d1f6d6c0164a2df7fdffd8d57"} err="failed to get container status \"ced424a225ca3ffcfadb918953cbd312e9fcdb8d1f6d6c0164a2df7fdffd8d57\": rpc error: code = NotFound desc = could not find container \"ced424a225ca3ffcfadb918953cbd312e9fcdb8d1f6d6c0164a2df7fdffd8d57\": container with ID starting with ced424a225ca3ffcfadb918953cbd312e9fcdb8d1f6d6c0164a2df7fdffd8d57 not found: ID does not exist" Nov 29 08:11:19 crc kubenswrapper[4828]: I1129 08:11:19.041396 4828 scope.go:117] "RemoveContainer" containerID="6020134b9fa22a6c23d6b8afd9bb7f3126feb32f67634f211d14634013489cce" Nov 29 08:11:19 crc kubenswrapper[4828]: E1129 
08:11:19.041734 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6020134b9fa22a6c23d6b8afd9bb7f3126feb32f67634f211d14634013489cce\": container with ID starting with 6020134b9fa22a6c23d6b8afd9bb7f3126feb32f67634f211d14634013489cce not found: ID does not exist" containerID="6020134b9fa22a6c23d6b8afd9bb7f3126feb32f67634f211d14634013489cce" Nov 29 08:11:19 crc kubenswrapper[4828]: I1129 08:11:19.041759 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6020134b9fa22a6c23d6b8afd9bb7f3126feb32f67634f211d14634013489cce"} err="failed to get container status \"6020134b9fa22a6c23d6b8afd9bb7f3126feb32f67634f211d14634013489cce\": rpc error: code = NotFound desc = could not find container \"6020134b9fa22a6c23d6b8afd9bb7f3126feb32f67634f211d14634013489cce\": container with ID starting with 6020134b9fa22a6c23d6b8afd9bb7f3126feb32f67634f211d14634013489cce not found: ID does not exist" Nov 29 08:11:19 crc kubenswrapper[4828]: I1129 08:11:19.424195 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f8361b-bb66-4ea0-a17d-72a68a3defed" path="/var/lib/kubelet/pods/07f8361b-bb66-4ea0-a17d-72a68a3defed/volumes" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.943097 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s9qh9"] Nov 29 08:11:28 crc kubenswrapper[4828]: E1129 08:11:28.944090 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d31a504-6051-4680-916c-b7c18d348f5f" containerName="registry-server" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.944105 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d31a504-6051-4680-916c-b7c18d348f5f" containerName="registry-server" Nov 29 08:11:28 crc kubenswrapper[4828]: E1129 08:11:28.944121 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f8361b-bb66-4ea0-a17d-72a68a3defed" 
containerName="extract-content" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.944129 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f8361b-bb66-4ea0-a17d-72a68a3defed" containerName="extract-content" Nov 29 08:11:28 crc kubenswrapper[4828]: E1129 08:11:28.944141 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f8361b-bb66-4ea0-a17d-72a68a3defed" containerName="registry-server" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.944147 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f8361b-bb66-4ea0-a17d-72a68a3defed" containerName="registry-server" Nov 29 08:11:28 crc kubenswrapper[4828]: E1129 08:11:28.944162 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f8361b-bb66-4ea0-a17d-72a68a3defed" containerName="extract-utilities" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.944169 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f8361b-bb66-4ea0-a17d-72a68a3defed" containerName="extract-utilities" Nov 29 08:11:28 crc kubenswrapper[4828]: E1129 08:11:28.944191 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d31a504-6051-4680-916c-b7c18d348f5f" containerName="extract-utilities" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.944197 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d31a504-6051-4680-916c-b7c18d348f5f" containerName="extract-utilities" Nov 29 08:11:28 crc kubenswrapper[4828]: E1129 08:11:28.944215 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5986c62-be2f-466b-ae18-eb064d27c27f" containerName="extract-content" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.944220 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5986c62-be2f-466b-ae18-eb064d27c27f" containerName="extract-content" Nov 29 08:11:28 crc kubenswrapper[4828]: E1129 08:11:28.944226 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5986c62-be2f-466b-ae18-eb064d27c27f" 
containerName="registry-server" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.944232 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5986c62-be2f-466b-ae18-eb064d27c27f" containerName="registry-server" Nov 29 08:11:28 crc kubenswrapper[4828]: E1129 08:11:28.944240 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d31a504-6051-4680-916c-b7c18d348f5f" containerName="extract-content" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.944245 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d31a504-6051-4680-916c-b7c18d348f5f" containerName="extract-content" Nov 29 08:11:28 crc kubenswrapper[4828]: E1129 08:11:28.944255 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5986c62-be2f-466b-ae18-eb064d27c27f" containerName="extract-utilities" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.944263 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5986c62-be2f-466b-ae18-eb064d27c27f" containerName="extract-utilities" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.944507 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d31a504-6051-4680-916c-b7c18d348f5f" containerName="registry-server" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.944524 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5986c62-be2f-466b-ae18-eb064d27c27f" containerName="registry-server" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.944541 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f8361b-bb66-4ea0-a17d-72a68a3defed" containerName="registry-server" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.947751 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:28 crc kubenswrapper[4828]: I1129 08:11:28.956383 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9qh9"] Nov 29 08:11:29 crc kubenswrapper[4828]: I1129 08:11:29.141880 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0b46aaf-5858-4244-97dd-58e238dca046-catalog-content\") pod \"redhat-marketplace-s9qh9\" (UID: \"f0b46aaf-5858-4244-97dd-58e238dca046\") " pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:29 crc kubenswrapper[4828]: I1129 08:11:29.142047 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0b46aaf-5858-4244-97dd-58e238dca046-utilities\") pod \"redhat-marketplace-s9qh9\" (UID: \"f0b46aaf-5858-4244-97dd-58e238dca046\") " pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:29 crc kubenswrapper[4828]: I1129 08:11:29.142120 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k76nb\" (UniqueName: \"kubernetes.io/projected/f0b46aaf-5858-4244-97dd-58e238dca046-kube-api-access-k76nb\") pod \"redhat-marketplace-s9qh9\" (UID: \"f0b46aaf-5858-4244-97dd-58e238dca046\") " pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:29 crc kubenswrapper[4828]: I1129 08:11:29.244322 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0b46aaf-5858-4244-97dd-58e238dca046-catalog-content\") pod \"redhat-marketplace-s9qh9\" (UID: \"f0b46aaf-5858-4244-97dd-58e238dca046\") " pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:29 crc kubenswrapper[4828]: I1129 08:11:29.244450 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0b46aaf-5858-4244-97dd-58e238dca046-utilities\") pod \"redhat-marketplace-s9qh9\" (UID: \"f0b46aaf-5858-4244-97dd-58e238dca046\") " pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:29 crc kubenswrapper[4828]: I1129 08:11:29.244513 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k76nb\" (UniqueName: \"kubernetes.io/projected/f0b46aaf-5858-4244-97dd-58e238dca046-kube-api-access-k76nb\") pod \"redhat-marketplace-s9qh9\" (UID: \"f0b46aaf-5858-4244-97dd-58e238dca046\") " pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:29 crc kubenswrapper[4828]: I1129 08:11:29.245132 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0b46aaf-5858-4244-97dd-58e238dca046-catalog-content\") pod \"redhat-marketplace-s9qh9\" (UID: \"f0b46aaf-5858-4244-97dd-58e238dca046\") " pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:29 crc kubenswrapper[4828]: I1129 08:11:29.245137 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0b46aaf-5858-4244-97dd-58e238dca046-utilities\") pod \"redhat-marketplace-s9qh9\" (UID: \"f0b46aaf-5858-4244-97dd-58e238dca046\") " pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:29 crc kubenswrapper[4828]: I1129 08:11:29.275300 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k76nb\" (UniqueName: \"kubernetes.io/projected/f0b46aaf-5858-4244-97dd-58e238dca046-kube-api-access-k76nb\") pod \"redhat-marketplace-s9qh9\" (UID: \"f0b46aaf-5858-4244-97dd-58e238dca046\") " pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:29 crc kubenswrapper[4828]: I1129 08:11:29.567746 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:30 crc kubenswrapper[4828]: I1129 08:11:30.047533 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9qh9"] Nov 29 08:11:30 crc kubenswrapper[4828]: W1129 08:11:30.064940 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0b46aaf_5858_4244_97dd_58e238dca046.slice/crio-bb62d99a9e5c765a0892673c9ef7b9128c4783008bcca57c887cd15134fd5cd0 WatchSource:0}: Error finding container bb62d99a9e5c765a0892673c9ef7b9128c4783008bcca57c887cd15134fd5cd0: Status 404 returned error can't find the container with id bb62d99a9e5c765a0892673c9ef7b9128c4783008bcca57c887cd15134fd5cd0 Nov 29 08:11:30 crc kubenswrapper[4828]: I1129 08:11:30.557869 4828 generic.go:334] "Generic (PLEG): container finished" podID="f0b46aaf-5858-4244-97dd-58e238dca046" containerID="cb016b72d976770e33985dcb7f6d35d160cf86bb1845e641a238fa5a4a771c09" exitCode=0 Nov 29 08:11:30 crc kubenswrapper[4828]: I1129 08:11:30.557971 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9qh9" event={"ID":"f0b46aaf-5858-4244-97dd-58e238dca046","Type":"ContainerDied","Data":"cb016b72d976770e33985dcb7f6d35d160cf86bb1845e641a238fa5a4a771c09"} Nov 29 08:11:30 crc kubenswrapper[4828]: I1129 08:11:30.558175 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9qh9" event={"ID":"f0b46aaf-5858-4244-97dd-58e238dca046","Type":"ContainerStarted","Data":"bb62d99a9e5c765a0892673c9ef7b9128c4783008bcca57c887cd15134fd5cd0"} Nov 29 08:11:32 crc kubenswrapper[4828]: I1129 08:11:32.578334 4828 generic.go:334] "Generic (PLEG): container finished" podID="f0b46aaf-5858-4244-97dd-58e238dca046" containerID="e0da2d350a08899e365c480e27c3736fce10a1a7a1ba69987b290ff754549ad6" exitCode=0 Nov 29 08:11:32 crc kubenswrapper[4828]: I1129 
08:11:32.578812 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9qh9" event={"ID":"f0b46aaf-5858-4244-97dd-58e238dca046","Type":"ContainerDied","Data":"e0da2d350a08899e365c480e27c3736fce10a1a7a1ba69987b290ff754549ad6"} Nov 29 08:11:33 crc kubenswrapper[4828]: I1129 08:11:33.590124 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9qh9" event={"ID":"f0b46aaf-5858-4244-97dd-58e238dca046","Type":"ContainerStarted","Data":"496a4bdaee1694acbb06ce9f563a239da669e5828e9a5e83dbac2b14b88de854"} Nov 29 08:11:33 crc kubenswrapper[4828]: I1129 08:11:33.616389 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s9qh9" podStartSLOduration=3.043770709 podStartE2EDuration="5.616372163s" podCreationTimestamp="2025-11-29 08:11:28 +0000 UTC" firstStartedPulling="2025-11-29 08:11:30.559729502 +0000 UTC m=+4230.181805560" lastFinishedPulling="2025-11-29 08:11:33.132330966 +0000 UTC m=+4232.754407014" observedRunningTime="2025-11-29 08:11:33.611638123 +0000 UTC m=+4233.233714191" watchObservedRunningTime="2025-11-29 08:11:33.616372163 +0000 UTC m=+4233.238448221" Nov 29 08:11:39 crc kubenswrapper[4828]: I1129 08:11:39.568457 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:39 crc kubenswrapper[4828]: I1129 08:11:39.569032 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:40 crc kubenswrapper[4828]: I1129 08:11:40.415372 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:40 crc kubenswrapper[4828]: I1129 08:11:40.482986 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 
08:11:40 crc kubenswrapper[4828]: I1129 08:11:40.659858 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9qh9"] Nov 29 08:11:41 crc kubenswrapper[4828]: I1129 08:11:41.531340 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pz8wj/must-gather-m7x5t"] Nov 29 08:11:41 crc kubenswrapper[4828]: I1129 08:11:41.533220 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pz8wj/must-gather-m7x5t" Nov 29 08:11:41 crc kubenswrapper[4828]: I1129 08:11:41.537437 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pz8wj"/"kube-root-ca.crt" Nov 29 08:11:41 crc kubenswrapper[4828]: I1129 08:11:41.537669 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pz8wj"/"openshift-service-ca.crt" Nov 29 08:11:41 crc kubenswrapper[4828]: I1129 08:11:41.537811 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pz8wj"/"default-dockercfg-lsbt7" Nov 29 08:11:41 crc kubenswrapper[4828]: I1129 08:11:41.540623 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pz8wj/must-gather-m7x5t"] Nov 29 08:11:41 crc kubenswrapper[4828]: I1129 08:11:41.579474 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c522r\" (UniqueName: \"kubernetes.io/projected/3f0b8db8-c2d6-41c8-bf9d-904788239b26-kube-api-access-c522r\") pod \"must-gather-m7x5t\" (UID: \"3f0b8db8-c2d6-41c8-bf9d-904788239b26\") " pod="openshift-must-gather-pz8wj/must-gather-m7x5t" Nov 29 08:11:41 crc kubenswrapper[4828]: I1129 08:11:41.579670 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3f0b8db8-c2d6-41c8-bf9d-904788239b26-must-gather-output\") pod \"must-gather-m7x5t\" (UID: 
\"3f0b8db8-c2d6-41c8-bf9d-904788239b26\") " pod="openshift-must-gather-pz8wj/must-gather-m7x5t" Nov 29 08:11:41 crc kubenswrapper[4828]: I1129 08:11:41.656796 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s9qh9" podUID="f0b46aaf-5858-4244-97dd-58e238dca046" containerName="registry-server" containerID="cri-o://496a4bdaee1694acbb06ce9f563a239da669e5828e9a5e83dbac2b14b88de854" gracePeriod=2 Nov 29 08:11:41 crc kubenswrapper[4828]: I1129 08:11:41.682235 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3f0b8db8-c2d6-41c8-bf9d-904788239b26-must-gather-output\") pod \"must-gather-m7x5t\" (UID: \"3f0b8db8-c2d6-41c8-bf9d-904788239b26\") " pod="openshift-must-gather-pz8wj/must-gather-m7x5t" Nov 29 08:11:41 crc kubenswrapper[4828]: I1129 08:11:41.682708 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c522r\" (UniqueName: \"kubernetes.io/projected/3f0b8db8-c2d6-41c8-bf9d-904788239b26-kube-api-access-c522r\") pod \"must-gather-m7x5t\" (UID: \"3f0b8db8-c2d6-41c8-bf9d-904788239b26\") " pod="openshift-must-gather-pz8wj/must-gather-m7x5t" Nov 29 08:11:41 crc kubenswrapper[4828]: I1129 08:11:41.682838 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3f0b8db8-c2d6-41c8-bf9d-904788239b26-must-gather-output\") pod \"must-gather-m7x5t\" (UID: \"3f0b8db8-c2d6-41c8-bf9d-904788239b26\") " pod="openshift-must-gather-pz8wj/must-gather-m7x5t" Nov 29 08:11:41 crc kubenswrapper[4828]: I1129 08:11:41.711722 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c522r\" (UniqueName: \"kubernetes.io/projected/3f0b8db8-c2d6-41c8-bf9d-904788239b26-kube-api-access-c522r\") pod \"must-gather-m7x5t\" (UID: \"3f0b8db8-c2d6-41c8-bf9d-904788239b26\") " 
pod="openshift-must-gather-pz8wj/must-gather-m7x5t" Nov 29 08:11:41 crc kubenswrapper[4828]: I1129 08:11:41.858330 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pz8wj/must-gather-m7x5t" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.205600 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.304467 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0b46aaf-5858-4244-97dd-58e238dca046-catalog-content\") pod \"f0b46aaf-5858-4244-97dd-58e238dca046\" (UID: \"f0b46aaf-5858-4244-97dd-58e238dca046\") " Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.304559 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k76nb\" (UniqueName: \"kubernetes.io/projected/f0b46aaf-5858-4244-97dd-58e238dca046-kube-api-access-k76nb\") pod \"f0b46aaf-5858-4244-97dd-58e238dca046\" (UID: \"f0b46aaf-5858-4244-97dd-58e238dca046\") " Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.304602 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0b46aaf-5858-4244-97dd-58e238dca046-utilities\") pod \"f0b46aaf-5858-4244-97dd-58e238dca046\" (UID: \"f0b46aaf-5858-4244-97dd-58e238dca046\") " Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.305863 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0b46aaf-5858-4244-97dd-58e238dca046-utilities" (OuterVolumeSpecName: "utilities") pod "f0b46aaf-5858-4244-97dd-58e238dca046" (UID: "f0b46aaf-5858-4244-97dd-58e238dca046"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.315645 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b46aaf-5858-4244-97dd-58e238dca046-kube-api-access-k76nb" (OuterVolumeSpecName: "kube-api-access-k76nb") pod "f0b46aaf-5858-4244-97dd-58e238dca046" (UID: "f0b46aaf-5858-4244-97dd-58e238dca046"). InnerVolumeSpecName "kube-api-access-k76nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.326595 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0b46aaf-5858-4244-97dd-58e238dca046-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0b46aaf-5858-4244-97dd-58e238dca046" (UID: "f0b46aaf-5858-4244-97dd-58e238dca046"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.406945 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0b46aaf-5858-4244-97dd-58e238dca046-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.407464 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k76nb\" (UniqueName: \"kubernetes.io/projected/f0b46aaf-5858-4244-97dd-58e238dca046-kube-api-access-k76nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.407481 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0b46aaf-5858-4244-97dd-58e238dca046-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.442631 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pz8wj/must-gather-m7x5t"] Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 
08:11:42.665618 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pz8wj/must-gather-m7x5t" event={"ID":"3f0b8db8-c2d6-41c8-bf9d-904788239b26","Type":"ContainerStarted","Data":"afbb2086eceb7db9ac5387813a48dbfe867b3a27bff8b1e8591531c95b935f4a"} Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.667599 4828 generic.go:334] "Generic (PLEG): container finished" podID="f0b46aaf-5858-4244-97dd-58e238dca046" containerID="496a4bdaee1694acbb06ce9f563a239da669e5828e9a5e83dbac2b14b88de854" exitCode=0 Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.667644 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9qh9" event={"ID":"f0b46aaf-5858-4244-97dd-58e238dca046","Type":"ContainerDied","Data":"496a4bdaee1694acbb06ce9f563a239da669e5828e9a5e83dbac2b14b88de854"} Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.667675 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9qh9" event={"ID":"f0b46aaf-5858-4244-97dd-58e238dca046","Type":"ContainerDied","Data":"bb62d99a9e5c765a0892673c9ef7b9128c4783008bcca57c887cd15134fd5cd0"} Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.667679 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9qh9" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.667692 4828 scope.go:117] "RemoveContainer" containerID="496a4bdaee1694acbb06ce9f563a239da669e5828e9a5e83dbac2b14b88de854" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.693442 4828 scope.go:117] "RemoveContainer" containerID="e0da2d350a08899e365c480e27c3736fce10a1a7a1ba69987b290ff754549ad6" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.706818 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9qh9"] Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.715784 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9qh9"] Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.717896 4828 scope.go:117] "RemoveContainer" containerID="cb016b72d976770e33985dcb7f6d35d160cf86bb1845e641a238fa5a4a771c09" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.801614 4828 scope.go:117] "RemoveContainer" containerID="496a4bdaee1694acbb06ce9f563a239da669e5828e9a5e83dbac2b14b88de854" Nov 29 08:11:42 crc kubenswrapper[4828]: E1129 08:11:42.802059 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"496a4bdaee1694acbb06ce9f563a239da669e5828e9a5e83dbac2b14b88de854\": container with ID starting with 496a4bdaee1694acbb06ce9f563a239da669e5828e9a5e83dbac2b14b88de854 not found: ID does not exist" containerID="496a4bdaee1694acbb06ce9f563a239da669e5828e9a5e83dbac2b14b88de854" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.802149 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496a4bdaee1694acbb06ce9f563a239da669e5828e9a5e83dbac2b14b88de854"} err="failed to get container status \"496a4bdaee1694acbb06ce9f563a239da669e5828e9a5e83dbac2b14b88de854\": rpc error: code = NotFound desc = could not find container 
\"496a4bdaee1694acbb06ce9f563a239da669e5828e9a5e83dbac2b14b88de854\": container with ID starting with 496a4bdaee1694acbb06ce9f563a239da669e5828e9a5e83dbac2b14b88de854 not found: ID does not exist" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.802217 4828 scope.go:117] "RemoveContainer" containerID="e0da2d350a08899e365c480e27c3736fce10a1a7a1ba69987b290ff754549ad6" Nov 29 08:11:42 crc kubenswrapper[4828]: E1129 08:11:42.802578 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0da2d350a08899e365c480e27c3736fce10a1a7a1ba69987b290ff754549ad6\": container with ID starting with e0da2d350a08899e365c480e27c3736fce10a1a7a1ba69987b290ff754549ad6 not found: ID does not exist" containerID="e0da2d350a08899e365c480e27c3736fce10a1a7a1ba69987b290ff754549ad6" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.802745 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0da2d350a08899e365c480e27c3736fce10a1a7a1ba69987b290ff754549ad6"} err="failed to get container status \"e0da2d350a08899e365c480e27c3736fce10a1a7a1ba69987b290ff754549ad6\": rpc error: code = NotFound desc = could not find container \"e0da2d350a08899e365c480e27c3736fce10a1a7a1ba69987b290ff754549ad6\": container with ID starting with e0da2d350a08899e365c480e27c3736fce10a1a7a1ba69987b290ff754549ad6 not found: ID does not exist" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.802784 4828 scope.go:117] "RemoveContainer" containerID="cb016b72d976770e33985dcb7f6d35d160cf86bb1845e641a238fa5a4a771c09" Nov 29 08:11:42 crc kubenswrapper[4828]: E1129 08:11:42.803163 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb016b72d976770e33985dcb7f6d35d160cf86bb1845e641a238fa5a4a771c09\": container with ID starting with cb016b72d976770e33985dcb7f6d35d160cf86bb1845e641a238fa5a4a771c09 not found: ID does not exist" 
containerID="cb016b72d976770e33985dcb7f6d35d160cf86bb1845e641a238fa5a4a771c09" Nov 29 08:11:42 crc kubenswrapper[4828]: I1129 08:11:42.803295 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb016b72d976770e33985dcb7f6d35d160cf86bb1845e641a238fa5a4a771c09"} err="failed to get container status \"cb016b72d976770e33985dcb7f6d35d160cf86bb1845e641a238fa5a4a771c09\": rpc error: code = NotFound desc = could not find container \"cb016b72d976770e33985dcb7f6d35d160cf86bb1845e641a238fa5a4a771c09\": container with ID starting with cb016b72d976770e33985dcb7f6d35d160cf86bb1845e641a238fa5a4a771c09 not found: ID does not exist" Nov 29 08:11:43 crc kubenswrapper[4828]: I1129 08:11:43.428508 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0b46aaf-5858-4244-97dd-58e238dca046" path="/var/lib/kubelet/pods/f0b46aaf-5858-4244-97dd-58e238dca046/volumes" Nov 29 08:11:46 crc kubenswrapper[4828]: I1129 08:11:46.712505 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pz8wj/must-gather-m7x5t" event={"ID":"3f0b8db8-c2d6-41c8-bf9d-904788239b26","Type":"ContainerStarted","Data":"b0eb5fe1bc7f0d04f98249af0682bf52cb9c8b9984051d644ed9ebd989ce62d4"} Nov 29 08:11:47 crc kubenswrapper[4828]: I1129 08:11:47.722913 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pz8wj/must-gather-m7x5t" event={"ID":"3f0b8db8-c2d6-41c8-bf9d-904788239b26","Type":"ContainerStarted","Data":"e1458cc588d0901c73dda90f9e9950a5261cb8a2502155c8b63b687ab4fd714b"} Nov 29 08:11:47 crc kubenswrapper[4828]: I1129 08:11:47.743153 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pz8wj/must-gather-m7x5t" podStartSLOduration=2.86460688 podStartE2EDuration="6.743134036s" podCreationTimestamp="2025-11-29 08:11:41 +0000 UTC" firstStartedPulling="2025-11-29 08:11:42.448030297 +0000 UTC m=+4242.070106355" lastFinishedPulling="2025-11-29 
08:11:46.326557453 +0000 UTC m=+4245.948633511" observedRunningTime="2025-11-29 08:11:47.738560531 +0000 UTC m=+4247.360636589" watchObservedRunningTime="2025-11-29 08:11:47.743134036 +0000 UTC m=+4247.365210094" Nov 29 08:11:50 crc kubenswrapper[4828]: I1129 08:11:50.195182 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pz8wj/crc-debug-2qtrb"] Nov 29 08:11:50 crc kubenswrapper[4828]: E1129 08:11:50.196682 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b46aaf-5858-4244-97dd-58e238dca046" containerName="extract-utilities" Nov 29 08:11:50 crc kubenswrapper[4828]: I1129 08:11:50.196706 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b46aaf-5858-4244-97dd-58e238dca046" containerName="extract-utilities" Nov 29 08:11:50 crc kubenswrapper[4828]: E1129 08:11:50.196745 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b46aaf-5858-4244-97dd-58e238dca046" containerName="extract-content" Nov 29 08:11:50 crc kubenswrapper[4828]: I1129 08:11:50.196753 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b46aaf-5858-4244-97dd-58e238dca046" containerName="extract-content" Nov 29 08:11:50 crc kubenswrapper[4828]: E1129 08:11:50.196777 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b46aaf-5858-4244-97dd-58e238dca046" containerName="registry-server" Nov 29 08:11:50 crc kubenswrapper[4828]: I1129 08:11:50.196785 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b46aaf-5858-4244-97dd-58e238dca046" containerName="registry-server" Nov 29 08:11:50 crc kubenswrapper[4828]: I1129 08:11:50.197363 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b46aaf-5858-4244-97dd-58e238dca046" containerName="registry-server" Nov 29 08:11:50 crc kubenswrapper[4828]: I1129 08:11:50.198520 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pz8wj/crc-debug-2qtrb" Nov 29 08:11:50 crc kubenswrapper[4828]: I1129 08:11:50.383055 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b21f980-cdb7-4eab-a1b6-4b402ed5b951-host\") pod \"crc-debug-2qtrb\" (UID: \"2b21f980-cdb7-4eab-a1b6-4b402ed5b951\") " pod="openshift-must-gather-pz8wj/crc-debug-2qtrb" Nov 29 08:11:50 crc kubenswrapper[4828]: I1129 08:11:50.383213 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvgsq\" (UniqueName: \"kubernetes.io/projected/2b21f980-cdb7-4eab-a1b6-4b402ed5b951-kube-api-access-jvgsq\") pod \"crc-debug-2qtrb\" (UID: \"2b21f980-cdb7-4eab-a1b6-4b402ed5b951\") " pod="openshift-must-gather-pz8wj/crc-debug-2qtrb" Nov 29 08:11:50 crc kubenswrapper[4828]: I1129 08:11:50.484663 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b21f980-cdb7-4eab-a1b6-4b402ed5b951-host\") pod \"crc-debug-2qtrb\" (UID: \"2b21f980-cdb7-4eab-a1b6-4b402ed5b951\") " pod="openshift-must-gather-pz8wj/crc-debug-2qtrb" Nov 29 08:11:50 crc kubenswrapper[4828]: I1129 08:11:50.484748 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvgsq\" (UniqueName: \"kubernetes.io/projected/2b21f980-cdb7-4eab-a1b6-4b402ed5b951-kube-api-access-jvgsq\") pod \"crc-debug-2qtrb\" (UID: \"2b21f980-cdb7-4eab-a1b6-4b402ed5b951\") " pod="openshift-must-gather-pz8wj/crc-debug-2qtrb" Nov 29 08:11:50 crc kubenswrapper[4828]: I1129 08:11:50.485055 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b21f980-cdb7-4eab-a1b6-4b402ed5b951-host\") pod \"crc-debug-2qtrb\" (UID: \"2b21f980-cdb7-4eab-a1b6-4b402ed5b951\") " pod="openshift-must-gather-pz8wj/crc-debug-2qtrb" Nov 29 08:11:50 crc 
kubenswrapper[4828]: I1129 08:11:50.506383 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvgsq\" (UniqueName: \"kubernetes.io/projected/2b21f980-cdb7-4eab-a1b6-4b402ed5b951-kube-api-access-jvgsq\") pod \"crc-debug-2qtrb\" (UID: \"2b21f980-cdb7-4eab-a1b6-4b402ed5b951\") " pod="openshift-must-gather-pz8wj/crc-debug-2qtrb" Nov 29 08:11:50 crc kubenswrapper[4828]: I1129 08:11:50.529399 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pz8wj/crc-debug-2qtrb" Nov 29 08:11:50 crc kubenswrapper[4828]: W1129 08:11:50.567406 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b21f980_cdb7_4eab_a1b6_4b402ed5b951.slice/crio-5eb80d5ea467ff2e113a814bd762e899e681d2fa6ac064c3cd522d6eebc516f4 WatchSource:0}: Error finding container 5eb80d5ea467ff2e113a814bd762e899e681d2fa6ac064c3cd522d6eebc516f4: Status 404 returned error can't find the container with id 5eb80d5ea467ff2e113a814bd762e899e681d2fa6ac064c3cd522d6eebc516f4 Nov 29 08:11:50 crc kubenswrapper[4828]: I1129 08:11:50.746957 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pz8wj/crc-debug-2qtrb" event={"ID":"2b21f980-cdb7-4eab-a1b6-4b402ed5b951","Type":"ContainerStarted","Data":"5eb80d5ea467ff2e113a814bd762e899e681d2fa6ac064c3cd522d6eebc516f4"} Nov 29 08:12:01 crc kubenswrapper[4828]: I1129 08:12:01.851052 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pz8wj/crc-debug-2qtrb" event={"ID":"2b21f980-cdb7-4eab-a1b6-4b402ed5b951","Type":"ContainerStarted","Data":"c64b0b51c720b9cc1b3525be4fafec604ebbe4f6b3a09831934f737a9e2960cb"} Nov 29 08:12:01 crc kubenswrapper[4828]: I1129 08:12:01.871044 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pz8wj/crc-debug-2qtrb" podStartSLOduration=1.244173982 podStartE2EDuration="11.871024437s" 
podCreationTimestamp="2025-11-29 08:11:50 +0000 UTC" firstStartedPulling="2025-11-29 08:11:50.572706292 +0000 UTC m=+4250.194782350" lastFinishedPulling="2025-11-29 08:12:01.199556747 +0000 UTC m=+4260.821632805" observedRunningTime="2025-11-29 08:12:01.867002216 +0000 UTC m=+4261.489078274" watchObservedRunningTime="2025-11-29 08:12:01.871024437 +0000 UTC m=+4261.493100496" Nov 29 08:12:45 crc kubenswrapper[4828]: I1129 08:12:45.272400 4828 generic.go:334] "Generic (PLEG): container finished" podID="2b21f980-cdb7-4eab-a1b6-4b402ed5b951" containerID="c64b0b51c720b9cc1b3525be4fafec604ebbe4f6b3a09831934f737a9e2960cb" exitCode=0 Nov 29 08:12:45 crc kubenswrapper[4828]: I1129 08:12:45.272491 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pz8wj/crc-debug-2qtrb" event={"ID":"2b21f980-cdb7-4eab-a1b6-4b402ed5b951","Type":"ContainerDied","Data":"c64b0b51c720b9cc1b3525be4fafec604ebbe4f6b3a09831934f737a9e2960cb"} Nov 29 08:12:46 crc kubenswrapper[4828]: I1129 08:12:46.392089 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pz8wj/crc-debug-2qtrb" Nov 29 08:12:46 crc kubenswrapper[4828]: I1129 08:12:46.424537 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pz8wj/crc-debug-2qtrb"] Nov 29 08:12:46 crc kubenswrapper[4828]: I1129 08:12:46.432984 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pz8wj/crc-debug-2qtrb"] Nov 29 08:12:46 crc kubenswrapper[4828]: I1129 08:12:46.517465 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b21f980-cdb7-4eab-a1b6-4b402ed5b951-host\") pod \"2b21f980-cdb7-4eab-a1b6-4b402ed5b951\" (UID: \"2b21f980-cdb7-4eab-a1b6-4b402ed5b951\") " Nov 29 08:12:46 crc kubenswrapper[4828]: I1129 08:12:46.517576 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b21f980-cdb7-4eab-a1b6-4b402ed5b951-host" (OuterVolumeSpecName: "host") pod "2b21f980-cdb7-4eab-a1b6-4b402ed5b951" (UID: "2b21f980-cdb7-4eab-a1b6-4b402ed5b951"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:12:46 crc kubenswrapper[4828]: I1129 08:12:46.517685 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvgsq\" (UniqueName: \"kubernetes.io/projected/2b21f980-cdb7-4eab-a1b6-4b402ed5b951-kube-api-access-jvgsq\") pod \"2b21f980-cdb7-4eab-a1b6-4b402ed5b951\" (UID: \"2b21f980-cdb7-4eab-a1b6-4b402ed5b951\") " Nov 29 08:12:46 crc kubenswrapper[4828]: I1129 08:12:46.518467 4828 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b21f980-cdb7-4eab-a1b6-4b402ed5b951-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:46 crc kubenswrapper[4828]: I1129 08:12:46.526519 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b21f980-cdb7-4eab-a1b6-4b402ed5b951-kube-api-access-jvgsq" (OuterVolumeSpecName: "kube-api-access-jvgsq") pod "2b21f980-cdb7-4eab-a1b6-4b402ed5b951" (UID: "2b21f980-cdb7-4eab-a1b6-4b402ed5b951"). InnerVolumeSpecName "kube-api-access-jvgsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:12:46 crc kubenswrapper[4828]: I1129 08:12:46.619620 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvgsq\" (UniqueName: \"kubernetes.io/projected/2b21f980-cdb7-4eab-a1b6-4b402ed5b951-kube-api-access-jvgsq\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:47 crc kubenswrapper[4828]: I1129 08:12:47.296354 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb80d5ea467ff2e113a814bd762e899e681d2fa6ac064c3cd522d6eebc516f4" Nov 29 08:12:47 crc kubenswrapper[4828]: I1129 08:12:47.296468 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pz8wj/crc-debug-2qtrb" Nov 29 08:12:47 crc kubenswrapper[4828]: I1129 08:12:47.423373 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b21f980-cdb7-4eab-a1b6-4b402ed5b951" path="/var/lib/kubelet/pods/2b21f980-cdb7-4eab-a1b6-4b402ed5b951/volumes" Nov 29 08:12:47 crc kubenswrapper[4828]: I1129 08:12:47.591117 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pz8wj/crc-debug-cj448"] Nov 29 08:12:47 crc kubenswrapper[4828]: E1129 08:12:47.591831 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b21f980-cdb7-4eab-a1b6-4b402ed5b951" containerName="container-00" Nov 29 08:12:47 crc kubenswrapper[4828]: I1129 08:12:47.591912 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b21f980-cdb7-4eab-a1b6-4b402ed5b951" containerName="container-00" Nov 29 08:12:47 crc kubenswrapper[4828]: I1129 08:12:47.592180 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b21f980-cdb7-4eab-a1b6-4b402ed5b951" containerName="container-00" Nov 29 08:12:47 crc kubenswrapper[4828]: I1129 08:12:47.592894 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pz8wj/crc-debug-cj448" Nov 29 08:12:47 crc kubenswrapper[4828]: I1129 08:12:47.741251 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd8dce24-43b9-4eab-b24c-94298332c745-host\") pod \"crc-debug-cj448\" (UID: \"fd8dce24-43b9-4eab-b24c-94298332c745\") " pod="openshift-must-gather-pz8wj/crc-debug-cj448" Nov 29 08:12:47 crc kubenswrapper[4828]: I1129 08:12:47.741526 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq4b2\" (UniqueName: \"kubernetes.io/projected/fd8dce24-43b9-4eab-b24c-94298332c745-kube-api-access-lq4b2\") pod \"crc-debug-cj448\" (UID: \"fd8dce24-43b9-4eab-b24c-94298332c745\") " pod="openshift-must-gather-pz8wj/crc-debug-cj448" Nov 29 08:12:47 crc kubenswrapper[4828]: I1129 08:12:47.843652 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4b2\" (UniqueName: \"kubernetes.io/projected/fd8dce24-43b9-4eab-b24c-94298332c745-kube-api-access-lq4b2\") pod \"crc-debug-cj448\" (UID: \"fd8dce24-43b9-4eab-b24c-94298332c745\") " pod="openshift-must-gather-pz8wj/crc-debug-cj448" Nov 29 08:12:47 crc kubenswrapper[4828]: I1129 08:12:47.844095 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd8dce24-43b9-4eab-b24c-94298332c745-host\") pod \"crc-debug-cj448\" (UID: \"fd8dce24-43b9-4eab-b24c-94298332c745\") " pod="openshift-must-gather-pz8wj/crc-debug-cj448" Nov 29 08:12:47 crc kubenswrapper[4828]: I1129 08:12:47.844175 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd8dce24-43b9-4eab-b24c-94298332c745-host\") pod \"crc-debug-cj448\" (UID: \"fd8dce24-43b9-4eab-b24c-94298332c745\") " pod="openshift-must-gather-pz8wj/crc-debug-cj448" Nov 29 08:12:47 crc 
kubenswrapper[4828]: I1129 08:12:47.862944 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq4b2\" (UniqueName: \"kubernetes.io/projected/fd8dce24-43b9-4eab-b24c-94298332c745-kube-api-access-lq4b2\") pod \"crc-debug-cj448\" (UID: \"fd8dce24-43b9-4eab-b24c-94298332c745\") " pod="openshift-must-gather-pz8wj/crc-debug-cj448" Nov 29 08:12:47 crc kubenswrapper[4828]: I1129 08:12:47.909554 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pz8wj/crc-debug-cj448" Nov 29 08:12:48 crc kubenswrapper[4828]: I1129 08:12:48.308610 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pz8wj/crc-debug-cj448" event={"ID":"fd8dce24-43b9-4eab-b24c-94298332c745","Type":"ContainerStarted","Data":"bbf98ff1de190e9f3073e7f59974cfc54983af9a27c8b2010dd6ad5d5f1ecdfd"} Nov 29 08:12:48 crc kubenswrapper[4828]: I1129 08:12:48.308885 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pz8wj/crc-debug-cj448" event={"ID":"fd8dce24-43b9-4eab-b24c-94298332c745","Type":"ContainerStarted","Data":"0c355ace0641c75aa0c19e787b2284583b71d716039757995a03d004ea521862"} Nov 29 08:12:48 crc kubenswrapper[4828]: I1129 08:12:48.331756 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pz8wj/crc-debug-cj448" podStartSLOduration=1.331730613 podStartE2EDuration="1.331730613s" podCreationTimestamp="2025-11-29 08:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:12:48.321599137 +0000 UTC m=+4307.943675195" watchObservedRunningTime="2025-11-29 08:12:48.331730613 +0000 UTC m=+4307.953806671" Nov 29 08:12:49 crc kubenswrapper[4828]: I1129 08:12:49.324624 4828 generic.go:334] "Generic (PLEG): container finished" podID="fd8dce24-43b9-4eab-b24c-94298332c745" 
containerID="bbf98ff1de190e9f3073e7f59974cfc54983af9a27c8b2010dd6ad5d5f1ecdfd" exitCode=0 Nov 29 08:12:49 crc kubenswrapper[4828]: I1129 08:12:49.324998 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pz8wj/crc-debug-cj448" event={"ID":"fd8dce24-43b9-4eab-b24c-94298332c745","Type":"ContainerDied","Data":"bbf98ff1de190e9f3073e7f59974cfc54983af9a27c8b2010dd6ad5d5f1ecdfd"} Nov 29 08:12:50 crc kubenswrapper[4828]: I1129 08:12:50.430760 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pz8wj/crc-debug-cj448" Nov 29 08:12:50 crc kubenswrapper[4828]: I1129 08:12:50.465964 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pz8wj/crc-debug-cj448"] Nov 29 08:12:50 crc kubenswrapper[4828]: I1129 08:12:50.482509 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pz8wj/crc-debug-cj448"] Nov 29 08:12:50 crc kubenswrapper[4828]: I1129 08:12:50.488720 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd8dce24-43b9-4eab-b24c-94298332c745-host\") pod \"fd8dce24-43b9-4eab-b24c-94298332c745\" (UID: \"fd8dce24-43b9-4eab-b24c-94298332c745\") " Nov 29 08:12:50 crc kubenswrapper[4828]: I1129 08:12:50.488854 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq4b2\" (UniqueName: \"kubernetes.io/projected/fd8dce24-43b9-4eab-b24c-94298332c745-kube-api-access-lq4b2\") pod \"fd8dce24-43b9-4eab-b24c-94298332c745\" (UID: \"fd8dce24-43b9-4eab-b24c-94298332c745\") " Nov 29 08:12:50 crc kubenswrapper[4828]: I1129 08:12:50.490676 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd8dce24-43b9-4eab-b24c-94298332c745-host" (OuterVolumeSpecName: "host") pod "fd8dce24-43b9-4eab-b24c-94298332c745" (UID: "fd8dce24-43b9-4eab-b24c-94298332c745"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:12:50 crc kubenswrapper[4828]: I1129 08:12:50.496443 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd8dce24-43b9-4eab-b24c-94298332c745-kube-api-access-lq4b2" (OuterVolumeSpecName: "kube-api-access-lq4b2") pod "fd8dce24-43b9-4eab-b24c-94298332c745" (UID: "fd8dce24-43b9-4eab-b24c-94298332c745"). InnerVolumeSpecName "kube-api-access-lq4b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:12:50 crc kubenswrapper[4828]: I1129 08:12:50.591454 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq4b2\" (UniqueName: \"kubernetes.io/projected/fd8dce24-43b9-4eab-b24c-94298332c745-kube-api-access-lq4b2\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:50 crc kubenswrapper[4828]: I1129 08:12:50.591491 4828 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd8dce24-43b9-4eab-b24c-94298332c745-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:51 crc kubenswrapper[4828]: I1129 08:12:51.342419 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c355ace0641c75aa0c19e787b2284583b71d716039757995a03d004ea521862" Nov 29 08:12:51 crc kubenswrapper[4828]: I1129 08:12:51.342518 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pz8wj/crc-debug-cj448" Nov 29 08:12:51 crc kubenswrapper[4828]: I1129 08:12:51.424581 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd8dce24-43b9-4eab-b24c-94298332c745" path="/var/lib/kubelet/pods/fd8dce24-43b9-4eab-b24c-94298332c745/volumes" Nov 29 08:12:51 crc kubenswrapper[4828]: I1129 08:12:51.620575 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pz8wj/crc-debug-bg8jg"] Nov 29 08:12:51 crc kubenswrapper[4828]: E1129 08:12:51.621038 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd8dce24-43b9-4eab-b24c-94298332c745" containerName="container-00" Nov 29 08:12:51 crc kubenswrapper[4828]: I1129 08:12:51.621055 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd8dce24-43b9-4eab-b24c-94298332c745" containerName="container-00" Nov 29 08:12:51 crc kubenswrapper[4828]: I1129 08:12:51.621327 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd8dce24-43b9-4eab-b24c-94298332c745" containerName="container-00" Nov 29 08:12:51 crc kubenswrapper[4828]: I1129 08:12:51.622148 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pz8wj/crc-debug-bg8jg" Nov 29 08:12:51 crc kubenswrapper[4828]: I1129 08:12:51.814133 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml9l8\" (UniqueName: \"kubernetes.io/projected/5d374c2d-866c-4608-a891-87f698bab258-kube-api-access-ml9l8\") pod \"crc-debug-bg8jg\" (UID: \"5d374c2d-866c-4608-a891-87f698bab258\") " pod="openshift-must-gather-pz8wj/crc-debug-bg8jg" Nov 29 08:12:51 crc kubenswrapper[4828]: I1129 08:12:51.814347 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d374c2d-866c-4608-a891-87f698bab258-host\") pod \"crc-debug-bg8jg\" (UID: \"5d374c2d-866c-4608-a891-87f698bab258\") " pod="openshift-must-gather-pz8wj/crc-debug-bg8jg" Nov 29 08:12:51 crc kubenswrapper[4828]: I1129 08:12:51.916664 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d374c2d-866c-4608-a891-87f698bab258-host\") pod \"crc-debug-bg8jg\" (UID: \"5d374c2d-866c-4608-a891-87f698bab258\") " pod="openshift-must-gather-pz8wj/crc-debug-bg8jg" Nov 29 08:12:51 crc kubenswrapper[4828]: I1129 08:12:51.916824 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d374c2d-866c-4608-a891-87f698bab258-host\") pod \"crc-debug-bg8jg\" (UID: \"5d374c2d-866c-4608-a891-87f698bab258\") " pod="openshift-must-gather-pz8wj/crc-debug-bg8jg" Nov 29 08:12:51 crc kubenswrapper[4828]: I1129 08:12:51.916984 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml9l8\" (UniqueName: \"kubernetes.io/projected/5d374c2d-866c-4608-a891-87f698bab258-kube-api-access-ml9l8\") pod \"crc-debug-bg8jg\" (UID: \"5d374c2d-866c-4608-a891-87f698bab258\") " pod="openshift-must-gather-pz8wj/crc-debug-bg8jg" Nov 29 08:12:51 crc 
kubenswrapper[4828]: I1129 08:12:51.936069 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml9l8\" (UniqueName: \"kubernetes.io/projected/5d374c2d-866c-4608-a891-87f698bab258-kube-api-access-ml9l8\") pod \"crc-debug-bg8jg\" (UID: \"5d374c2d-866c-4608-a891-87f698bab258\") " pod="openshift-must-gather-pz8wj/crc-debug-bg8jg" Nov 29 08:12:51 crc kubenswrapper[4828]: I1129 08:12:51.938959 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pz8wj/crc-debug-bg8jg" Nov 29 08:12:52 crc kubenswrapper[4828]: I1129 08:12:52.352285 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d374c2d-866c-4608-a891-87f698bab258" containerID="5b55126cfb8c4fb8988b9aadbeee3b715aa56fb720e12cecb690e8946cee5aee" exitCode=0 Nov 29 08:12:52 crc kubenswrapper[4828]: I1129 08:12:52.352337 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pz8wj/crc-debug-bg8jg" event={"ID":"5d374c2d-866c-4608-a891-87f698bab258","Type":"ContainerDied","Data":"5b55126cfb8c4fb8988b9aadbeee3b715aa56fb720e12cecb690e8946cee5aee"} Nov 29 08:12:52 crc kubenswrapper[4828]: I1129 08:12:52.352371 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pz8wj/crc-debug-bg8jg" event={"ID":"5d374c2d-866c-4608-a891-87f698bab258","Type":"ContainerStarted","Data":"0ba79290096ff6ad9a1d9e8770b25950becef2a2a5dc954f2384840c9cf88b9a"} Nov 29 08:12:52 crc kubenswrapper[4828]: I1129 08:12:52.387392 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pz8wj/crc-debug-bg8jg"] Nov 29 08:12:52 crc kubenswrapper[4828]: I1129 08:12:52.395437 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pz8wj/crc-debug-bg8jg"] Nov 29 08:12:53 crc kubenswrapper[4828]: I1129 08:12:53.466818 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pz8wj/crc-debug-bg8jg" Nov 29 08:12:53 crc kubenswrapper[4828]: I1129 08:12:53.650444 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml9l8\" (UniqueName: \"kubernetes.io/projected/5d374c2d-866c-4608-a891-87f698bab258-kube-api-access-ml9l8\") pod \"5d374c2d-866c-4608-a891-87f698bab258\" (UID: \"5d374c2d-866c-4608-a891-87f698bab258\") " Nov 29 08:12:53 crc kubenswrapper[4828]: I1129 08:12:53.650887 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d374c2d-866c-4608-a891-87f698bab258-host\") pod \"5d374c2d-866c-4608-a891-87f698bab258\" (UID: \"5d374c2d-866c-4608-a891-87f698bab258\") " Nov 29 08:12:53 crc kubenswrapper[4828]: I1129 08:12:53.651087 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d374c2d-866c-4608-a891-87f698bab258-host" (OuterVolumeSpecName: "host") pod "5d374c2d-866c-4608-a891-87f698bab258" (UID: "5d374c2d-866c-4608-a891-87f698bab258"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:12:53 crc kubenswrapper[4828]: I1129 08:12:53.652185 4828 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d374c2d-866c-4608-a891-87f698bab258-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:53 crc kubenswrapper[4828]: I1129 08:12:53.757882 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d374c2d-866c-4608-a891-87f698bab258-kube-api-access-ml9l8" (OuterVolumeSpecName: "kube-api-access-ml9l8") pod "5d374c2d-866c-4608-a891-87f698bab258" (UID: "5d374c2d-866c-4608-a891-87f698bab258"). InnerVolumeSpecName "kube-api-access-ml9l8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:12:53 crc kubenswrapper[4828]: I1129 08:12:53.856785 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml9l8\" (UniqueName: \"kubernetes.io/projected/5d374c2d-866c-4608-a891-87f698bab258-kube-api-access-ml9l8\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:54 crc kubenswrapper[4828]: I1129 08:12:54.377895 4828 scope.go:117] "RemoveContainer" containerID="5b55126cfb8c4fb8988b9aadbeee3b715aa56fb720e12cecb690e8946cee5aee" Nov 29 08:12:54 crc kubenswrapper[4828]: I1129 08:12:54.377914 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pz8wj/crc-debug-bg8jg" Nov 29 08:12:55 crc kubenswrapper[4828]: I1129 08:12:55.426123 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d374c2d-866c-4608-a891-87f698bab258" path="/var/lib/kubelet/pods/5d374c2d-866c-4608-a891-87f698bab258/volumes" Nov 29 08:13:08 crc kubenswrapper[4828]: I1129 08:13:08.273650 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-cc498b8c4-hstck_346a1de9-a6d0-451f-8ca9-172d43dc99f9/barbican-api/0.log" Nov 29 08:13:08 crc kubenswrapper[4828]: I1129 08:13:08.461411 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-cc498b8c4-hstck_346a1de9-a6d0-451f-8ca9-172d43dc99f9/barbican-api-log/0.log" Nov 29 08:13:08 crc kubenswrapper[4828]: I1129 08:13:08.532874 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-69488889b8-dcf7m_a5b5741a-29b4-4c45-85c7-8c2cb55857a3/barbican-keystone-listener/0.log" Nov 29 08:13:08 crc kubenswrapper[4828]: I1129 08:13:08.594812 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-69488889b8-dcf7m_a5b5741a-29b4-4c45-85c7-8c2cb55857a3/barbican-keystone-listener-log/0.log" Nov 29 08:13:08 crc kubenswrapper[4828]: I1129 08:13:08.919091 4828 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-57b9f79f95-xdwsq_c9c61053-a1cc-4c19-9042-61c7e4cdaffe/barbican-worker/0.log" Nov 29 08:13:08 crc kubenswrapper[4828]: I1129 08:13:08.935083 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-57b9f79f95-xdwsq_c9c61053-a1cc-4c19-9042-61c7e4cdaffe/barbican-worker-log/0.log" Nov 29 08:13:09 crc kubenswrapper[4828]: I1129 08:13:09.079790 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xpcg9_8525375e-b298-4e44-ae0b-9f26a3b1001a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:09 crc kubenswrapper[4828]: I1129 08:13:09.146664 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f8d3ec51-1a59-47fd-96f9-d97022ca7fcd/ceilometer-central-agent/0.log" Nov 29 08:13:09 crc kubenswrapper[4828]: I1129 08:13:09.212149 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f8d3ec51-1a59-47fd-96f9-d97022ca7fcd/ceilometer-notification-agent/0.log" Nov 29 08:13:09 crc kubenswrapper[4828]: I1129 08:13:09.288797 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f8d3ec51-1a59-47fd-96f9-d97022ca7fcd/proxy-httpd/0.log" Nov 29 08:13:09 crc kubenswrapper[4828]: I1129 08:13:09.339980 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f8d3ec51-1a59-47fd-96f9-d97022ca7fcd/sg-core/0.log" Nov 29 08:13:09 crc kubenswrapper[4828]: I1129 08:13:09.476899 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0856d1d8-20d9-4558-98fd-f955bbc00df7/cinder-api/0.log" Nov 29 08:13:09 crc kubenswrapper[4828]: I1129 08:13:09.529520 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0856d1d8-20d9-4558-98fd-f955bbc00df7/cinder-api-log/0.log" Nov 29 08:13:09 crc kubenswrapper[4828]: I1129 08:13:09.652910 4828 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9/cinder-scheduler/0.log" Nov 29 08:13:09 crc kubenswrapper[4828]: I1129 08:13:09.721614 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f8bed0cf-3cf4-4b63-bfc2-c2085a94edd9/probe/0.log" Nov 29 08:13:09 crc kubenswrapper[4828]: I1129 08:13:09.754087 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5ds8h_eb04df0e-e78b-4441-a2bd-76f7b0262653/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:09 crc kubenswrapper[4828]: I1129 08:13:09.929199 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2tkgm_55539b0e-2552-4e7c-89f0-e67ae0f38aba/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:09 crc kubenswrapper[4828]: I1129 08:13:09.965032 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-bffmg_a60b38bd-cee2-4ea6-840a-828961fde751/init/0.log" Nov 29 08:13:10 crc kubenswrapper[4828]: I1129 08:13:10.168206 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-bffmg_a60b38bd-cee2-4ea6-840a-828961fde751/init/0.log" Nov 29 08:13:10 crc kubenswrapper[4828]: I1129 08:13:10.234735 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-bffmg_a60b38bd-cee2-4ea6-840a-828961fde751/dnsmasq-dns/0.log" Nov 29 08:13:10 crc kubenswrapper[4828]: I1129 08:13:10.266622 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-rgpxf_85ada4f9-8597-4409-9fc4-7f4dd3594fcf/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:10 crc kubenswrapper[4828]: I1129 08:13:10.426195 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_ecfa61e1-38ee-4cc5-80ac-093b1880135a/glance-httpd/0.log" Nov 29 08:13:10 crc kubenswrapper[4828]: I1129 08:13:10.437135 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ecfa61e1-38ee-4cc5-80ac-093b1880135a/glance-log/0.log" Nov 29 08:13:10 crc kubenswrapper[4828]: I1129 08:13:10.599055 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_04bda42c-062d-483d-872e-bd260cf2b4b4/glance-httpd/0.log" Nov 29 08:13:10 crc kubenswrapper[4828]: I1129 08:13:10.665819 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_04bda42c-062d-483d-872e-bd260cf2b4b4/glance-log/0.log" Nov 29 08:13:11 crc kubenswrapper[4828]: I1129 08:13:11.298636 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7f579788cb-tbwlt_1cb551ca-3225-4ed7-9127-04f6a4abe792/heat-cfnapi/0.log" Nov 29 08:13:11 crc kubenswrapper[4828]: I1129 08:13:11.312473 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-fd957fd8c-nfdrx_65ec8661-f29c-455c-b0b6-04aaaad39bda/heat-engine/0.log" Nov 29 08:13:11 crc kubenswrapper[4828]: I1129 08:13:11.436374 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-59fbdb74df-c54jw_930ded64-8acc-4fc6-b729-034214fa160b/heat-api/0.log" Nov 29 08:13:11 crc kubenswrapper[4828]: I1129 08:13:11.487358 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:13:11 crc kubenswrapper[4828]: I1129 08:13:11.487435 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" 
podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:13:11 crc kubenswrapper[4828]: I1129 08:13:11.945683 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-h6k44_2da06014-9f35-43c6-88f3-7e9f6ffd3baf/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:12 crc kubenswrapper[4828]: I1129 08:13:12.308778 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-bj26s_ffcc2240-c156-4d2b-9500-1bf8015e5733/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:12 crc kubenswrapper[4828]: I1129 08:13:12.352958 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29406721-tmr8c_84a419c7-486a-4b21-a023-c74395681e1d/keystone-cron/0.log" Nov 29 08:13:12 crc kubenswrapper[4828]: I1129 08:13:12.534808 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_02492ec5-a65e-4179-aff9-b5d25154f8d2/kube-state-metrics/0.log" Nov 29 08:13:12 crc kubenswrapper[4828]: I1129 08:13:12.633454 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-757484cf46-h2rvl_55ea6c63-9a3a-42da-92c0-08ba9bd1efbe/keystone-api/0.log" Nov 29 08:13:12 crc kubenswrapper[4828]: I1129 08:13:12.702449 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vxgw4_c081856d-532f-4357-958b-b4c2070abbbf/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:13 crc kubenswrapper[4828]: I1129 08:13:13.134400 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84b768d757-5f2b9_3334d09a-df8a-448e-90a3-79f36ee70a07/neutron-httpd/0.log" Nov 29 08:13:13 crc kubenswrapper[4828]: I1129 08:13:13.160575 
4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6zk_6494a5a0-15bc-42c7-a812-8ca66317bea7/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:13 crc kubenswrapper[4828]: I1129 08:13:13.194439 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84b768d757-5f2b9_3334d09a-df8a-448e-90a3-79f36ee70a07/neutron-api/0.log" Nov 29 08:13:13 crc kubenswrapper[4828]: I1129 08:13:13.831031 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e53c7469-46ea-4683-97be-1b872217e983/nova-api-log/0.log" Nov 29 08:13:14 crc kubenswrapper[4828]: I1129 08:13:14.063692 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ff3c67db-7084-4abe-94f3-aafca06ae5e3/nova-cell0-conductor-conductor/0.log" Nov 29 08:13:14 crc kubenswrapper[4828]: I1129 08:13:14.103906 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e53c7469-46ea-4683-97be-1b872217e983/nova-api-api/0.log" Nov 29 08:13:14 crc kubenswrapper[4828]: I1129 08:13:14.172261 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c20767ac-ea5b-4bde-80f3-9e6355039f15/nova-cell1-conductor-conductor/0.log" Nov 29 08:13:14 crc kubenswrapper[4828]: I1129 08:13:14.448296 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_508b6f36-4c27-431d-aafa-94c8150647a4/nova-cell1-novncproxy-novncproxy/0.log" Nov 29 08:13:14 crc kubenswrapper[4828]: I1129 08:13:14.459159 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-xrjp7_839a08fc-14bb-4b73-8028-6dec803de923/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:14 crc kubenswrapper[4828]: I1129 08:13:14.755372 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_166ca75a-f156-4ce3-9a12-7b76ba38f92e/nova-metadata-log/0.log" Nov 29 08:13:14 crc kubenswrapper[4828]: I1129 08:13:14.855465 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7c8c420f-fb0c-4028-b5bc-7ed98c1d7d05/nova-scheduler-scheduler/0.log" Nov 29 08:13:14 crc kubenswrapper[4828]: I1129 08:13:14.990222 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f86097ba-a57f-4f34-8668-dc1daef612da/mysql-bootstrap/0.log" Nov 29 08:13:15 crc kubenswrapper[4828]: I1129 08:13:15.280960 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f86097ba-a57f-4f34-8668-dc1daef612da/galera/0.log" Nov 29 08:13:15 crc kubenswrapper[4828]: I1129 08:13:15.318867 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f86097ba-a57f-4f34-8668-dc1daef612da/mysql-bootstrap/0.log" Nov 29 08:13:15 crc kubenswrapper[4828]: I1129 08:13:15.503010 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bb49e4ad-de75-4a14-bbf3-f5bd0099add6/mysql-bootstrap/0.log" Nov 29 08:13:15 crc kubenswrapper[4828]: I1129 08:13:15.734023 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bb49e4ad-de75-4a14-bbf3-f5bd0099add6/mysql-bootstrap/0.log" Nov 29 08:13:15 crc kubenswrapper[4828]: I1129 08:13:15.739097 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bb49e4ad-de75-4a14-bbf3-f5bd0099add6/galera/0.log" Nov 29 08:13:15 crc kubenswrapper[4828]: I1129 08:13:15.919735 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_166ca75a-f156-4ce3-9a12-7b76ba38f92e/nova-metadata-metadata/0.log" Nov 29 08:13:15 crc kubenswrapper[4828]: I1129 08:13:15.929533 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_262aab08-d0cd-47a7-b913-c3df9daf6739/openstackclient/0.log" Nov 29 08:13:16 crc kubenswrapper[4828]: I1129 08:13:16.076928 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rhxqt_58916077-c611-4cd6-9b53-b668fa2abb47/openstack-network-exporter/0.log" Nov 29 08:13:16 crc kubenswrapper[4828]: I1129 08:13:16.264609 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hhg6w_48706635-ba41-45a3-8167-56c05555f0d2/ovsdb-server-init/0.log" Nov 29 08:13:16 crc kubenswrapper[4828]: I1129 08:13:16.449931 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hhg6w_48706635-ba41-45a3-8167-56c05555f0d2/ovsdb-server-init/0.log" Nov 29 08:13:16 crc kubenswrapper[4828]: I1129 08:13:16.451195 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hhg6w_48706635-ba41-45a3-8167-56c05555f0d2/ovs-vswitchd/0.log" Nov 29 08:13:16 crc kubenswrapper[4828]: I1129 08:13:16.497680 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hhg6w_48706635-ba41-45a3-8167-56c05555f0d2/ovsdb-server/0.log" Nov 29 08:13:16 crc kubenswrapper[4828]: I1129 08:13:16.633793 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-twdtp_5197fd5f-121f-4085-8985-a8e31ee8f997/ovn-controller/0.log" Nov 29 08:13:16 crc kubenswrapper[4828]: I1129 08:13:16.747707 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-894mk_3ba051a3-9160-4e2d-85b3-88f7c43c00c7/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:16 crc kubenswrapper[4828]: I1129 08:13:16.826934 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_31df9f28-9df3-4686-9aa5-ea45706459fb/openstack-network-exporter/0.log" Nov 29 08:13:16 crc kubenswrapper[4828]: I1129 08:13:16.943249 
4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_31df9f28-9df3-4686-9aa5-ea45706459fb/ovn-northd/0.log" Nov 29 08:13:17 crc kubenswrapper[4828]: I1129 08:13:17.066723 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cc069e9b-6fbd-427b-bc62-b99d31c5292d/openstack-network-exporter/0.log" Nov 29 08:13:17 crc kubenswrapper[4828]: I1129 08:13:17.141097 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cc069e9b-6fbd-427b-bc62-b99d31c5292d/ovsdbserver-nb/0.log" Nov 29 08:13:17 crc kubenswrapper[4828]: I1129 08:13:17.291176 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e2df4c7c-de4a-48b4-99b8-e66672e38e3d/ovsdbserver-sb/0.log" Nov 29 08:13:17 crc kubenswrapper[4828]: I1129 08:13:17.304667 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e2df4c7c-de4a-48b4-99b8-e66672e38e3d/openstack-network-exporter/0.log" Nov 29 08:13:17 crc kubenswrapper[4828]: I1129 08:13:17.511684 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f9777c7b8-ctgxk_32bb4de2-38a8-4361-9f97-d2932fc3bba6/placement-api/0.log" Nov 29 08:13:17 crc kubenswrapper[4828]: I1129 08:13:17.602558 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f9777c7b8-ctgxk_32bb4de2-38a8-4361-9f97-d2932fc3bba6/placement-log/0.log" Nov 29 08:13:17 crc kubenswrapper[4828]: I1129 08:13:17.730081 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2d69c925-6be3-4e39-8aa5-0e27cf8693cb/setup-container/0.log" Nov 29 08:13:17 crc kubenswrapper[4828]: I1129 08:13:17.921415 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2d69c925-6be3-4e39-8aa5-0e27cf8693cb/rabbitmq/0.log" Nov 29 08:13:17 crc kubenswrapper[4828]: I1129 08:13:17.948452 4828 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2d69c925-6be3-4e39-8aa5-0e27cf8693cb/setup-container/0.log" Nov 29 08:13:17 crc kubenswrapper[4828]: I1129 08:13:17.991039 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_dcd1ae34-ece0-4632-8783-40db599d9ec4/setup-container/0.log" Nov 29 08:13:18 crc kubenswrapper[4828]: I1129 08:13:18.179398 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_dcd1ae34-ece0-4632-8783-40db599d9ec4/rabbitmq/0.log" Nov 29 08:13:18 crc kubenswrapper[4828]: I1129 08:13:18.251947 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_dcd1ae34-ece0-4632-8783-40db599d9ec4/setup-container/0.log" Nov 29 08:13:18 crc kubenswrapper[4828]: I1129 08:13:18.283903 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-zhb6r_73db3b43-20c5-4549-9414-3a352d30b599/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:18 crc kubenswrapper[4828]: I1129 08:13:18.507397 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-rhzsw_a091a008-dd3d-4c3f-be97-ac7b35c7c52a/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:18 crc kubenswrapper[4828]: I1129 08:13:18.575027 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-p6jks_fe5d998b-174d-4669-b989-38c40f97ed4b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:18 crc kubenswrapper[4828]: I1129 08:13:18.766532 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-qz5xt_933e9cb6-fe3b-4e84-869c-ee299d147048/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:18 crc kubenswrapper[4828]: I1129 08:13:18.788376 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-qj8wj_5dadc365-25bc-43a9-9e8a-c17749832d20/ssh-known-hosts-edpm-deployment/0.log" Nov 29 08:13:19 crc kubenswrapper[4828]: I1129 08:13:19.044427 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-c8bd5b56c-6wm6v_ffaa931d-e049-475f-8a3a-95cdf41bf40f/proxy-server/0.log" Nov 29 08:13:19 crc kubenswrapper[4828]: I1129 08:13:19.143357 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-c8bd5b56c-6wm6v_ffaa931d-e049-475f-8a3a-95cdf41bf40f/proxy-httpd/0.log" Nov 29 08:13:19 crc kubenswrapper[4828]: I1129 08:13:19.161560 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-xb8hk_7c12ad5a-3768-4925-84dc-83e3733f4a49/swift-ring-rebalance/0.log" Nov 29 08:13:19 crc kubenswrapper[4828]: I1129 08:13:19.371494 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/account-auditor/0.log" Nov 29 08:13:19 crc kubenswrapper[4828]: I1129 08:13:19.389254 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/account-reaper/0.log" Nov 29 08:13:19 crc kubenswrapper[4828]: I1129 08:13:19.413948 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/account-replicator/0.log" Nov 29 08:13:19 crc kubenswrapper[4828]: I1129 08:13:19.514546 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/account-server/0.log" Nov 29 08:13:19 crc kubenswrapper[4828]: I1129 08:13:19.629878 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/container-auditor/0.log" Nov 29 08:13:19 crc kubenswrapper[4828]: I1129 08:13:19.660513 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/container-server/0.log" Nov 29 08:13:19 crc kubenswrapper[4828]: I1129 08:13:19.700718 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/container-replicator/0.log" Nov 29 08:13:19 crc kubenswrapper[4828]: I1129 08:13:19.810476 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/container-updater/0.log" Nov 29 08:13:19 crc kubenswrapper[4828]: I1129 08:13:19.886646 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/object-auditor/0.log" Nov 29 08:13:19 crc kubenswrapper[4828]: I1129 08:13:19.891523 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/object-expirer/0.log" Nov 29 08:13:19 crc kubenswrapper[4828]: I1129 08:13:19.963572 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/object-replicator/0.log" Nov 29 08:13:20 crc kubenswrapper[4828]: I1129 08:13:20.008243 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/object-server/0.log" Nov 29 08:13:20 crc kubenswrapper[4828]: I1129 08:13:20.100202 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/object-updater/0.log" Nov 29 08:13:20 crc kubenswrapper[4828]: I1129 08:13:20.160904 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/rsync/0.log" Nov 29 08:13:20 crc kubenswrapper[4828]: I1129 08:13:20.244382 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ed93966d-a9d0-456c-b459-f06703deef71/swift-recon-cron/0.log" Nov 29 08:13:20 crc kubenswrapper[4828]: I1129 08:13:20.433457 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-rgqn9_38983969-7980-489d-973e-2d4bc3de2420/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:20 crc kubenswrapper[4828]: I1129 08:13:20.538633 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_fcc5d6b3-273c-4e60-9f6f-fb2a4d97b5da/tempest-tests-tempest-tests-runner/0.log" Nov 29 08:13:20 crc kubenswrapper[4828]: I1129 08:13:20.664033 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9f67c663-1885-40e8-94c2-f35ac8e7a0f1/test-operator-logs-container/0.log" Nov 29 08:13:20 crc kubenswrapper[4828]: I1129 08:13:20.937993 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-8jcc6_f1c81965-17fb-40fe-bc15-a75f50a27eb8/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:13:32 crc kubenswrapper[4828]: I1129 08:13:32.081383 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8f120af9-3005-49a2-9099-818ef49164dc/memcached/0.log" Nov 29 08:13:41 crc kubenswrapper[4828]: I1129 08:13:41.530888 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:13:41 crc kubenswrapper[4828]: I1129 08:13:41.531555 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:13:49 crc kubenswrapper[4828]: I1129 08:13:49.332840 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh_bb3be18a-9791-4cc9-92bf-685171bfdaf9/util/0.log" Nov 29 08:13:49 crc kubenswrapper[4828]: I1129 08:13:49.576889 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh_bb3be18a-9791-4cc9-92bf-685171bfdaf9/util/0.log" Nov 29 08:13:49 crc kubenswrapper[4828]: I1129 08:13:49.579520 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh_bb3be18a-9791-4cc9-92bf-685171bfdaf9/pull/0.log" Nov 29 08:13:49 crc kubenswrapper[4828]: I1129 08:13:49.584878 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh_bb3be18a-9791-4cc9-92bf-685171bfdaf9/pull/0.log" Nov 29 08:13:49 crc kubenswrapper[4828]: I1129 08:13:49.750779 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh_bb3be18a-9791-4cc9-92bf-685171bfdaf9/pull/0.log" Nov 29 08:13:49 crc kubenswrapper[4828]: I1129 08:13:49.774800 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh_bb3be18a-9791-4cc9-92bf-685171bfdaf9/util/0.log" Nov 29 08:13:49 crc kubenswrapper[4828]: I1129 08:13:49.802081 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c873e4f1774ec98d76352049a6e75033ca704211665965a7bc59a897a42rvh_bb3be18a-9791-4cc9-92bf-685171bfdaf9/extract/0.log" Nov 29 08:13:49 
crc kubenswrapper[4828]: I1129 08:13:49.989525 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-jrkpv_f53d1403-e6c3-4696-bc32-7b711c38083e/kube-rbac-proxy/0.log" Nov 29 08:13:50 crc kubenswrapper[4828]: I1129 08:13:50.028660 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-jrkpv_f53d1403-e6c3-4696-bc32-7b711c38083e/manager/0.log" Nov 29 08:13:50 crc kubenswrapper[4828]: I1129 08:13:50.099187 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-s9ddc_29d5d952-52dc-4a17-8f00-fa65fda896d0/kube-rbac-proxy/0.log" Nov 29 08:13:50 crc kubenswrapper[4828]: I1129 08:13:50.262169 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-s9ddc_29d5d952-52dc-4a17-8f00-fa65fda896d0/manager/0.log" Nov 29 08:13:50 crc kubenswrapper[4828]: I1129 08:13:50.292157 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-dkprw_a54ef84a-2f7d-47be-a9fd-699a627b3d91/kube-rbac-proxy/0.log" Nov 29 08:13:50 crc kubenswrapper[4828]: I1129 08:13:50.335428 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-dkprw_a54ef84a-2f7d-47be-a9fd-699a627b3d91/manager/0.log" Nov 29 08:13:50 crc kubenswrapper[4828]: I1129 08:13:50.477696 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-mvwrz_1048c045-97cc-4506-a0ad-48a8f47366e5/kube-rbac-proxy/0.log" Nov 29 08:13:50 crc kubenswrapper[4828]: I1129 08:13:50.626468 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-mvwrz_1048c045-97cc-4506-a0ad-48a8f47366e5/manager/0.log" Nov 29 08:13:50 crc kubenswrapper[4828]: I1129 08:13:50.720884 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-f569bc5bd-7n76r_8152b24c-fd27-443d-a35e-1ca6e4a5cf3e/kube-rbac-proxy/0.log" Nov 29 08:13:50 crc kubenswrapper[4828]: I1129 08:13:50.787145 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-f569bc5bd-7n76r_8152b24c-fd27-443d-a35e-1ca6e4a5cf3e/manager/0.log" Nov 29 08:13:50 crc kubenswrapper[4828]: I1129 08:13:50.837809 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-qpxtq_741b8a30-2d40-4d2d-b2ee-3ed44cc95ff7/kube-rbac-proxy/0.log" Nov 29 08:13:50 crc kubenswrapper[4828]: I1129 08:13:50.972631 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-qpxtq_741b8a30-2d40-4d2d-b2ee-3ed44cc95ff7/manager/0.log" Nov 29 08:13:51 crc kubenswrapper[4828]: I1129 08:13:51.016641 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-ntfvp_57cd6967-e631-48d7-bbd4-856ac77f592b/kube-rbac-proxy/0.log" Nov 29 08:13:51 crc kubenswrapper[4828]: I1129 08:13:51.243837 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-xtdv5_5d8c92ab-128c-41fa-8ae1-25b2c0776232/manager/0.log" Nov 29 08:13:51 crc kubenswrapper[4828]: I1129 08:13:51.264097 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-xtdv5_5d8c92ab-128c-41fa-8ae1-25b2c0776232/kube-rbac-proxy/0.log" Nov 29 08:13:51 crc kubenswrapper[4828]: I1129 08:13:51.293656 4828 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-ntfvp_57cd6967-e631-48d7-bbd4-856ac77f592b/manager/0.log" Nov 29 08:13:51 crc kubenswrapper[4828]: I1129 08:13:51.462687 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-s9v88_2793e6a5-22f6-4562-8253-c7c6993728fc/kube-rbac-proxy/0.log" Nov 29 08:13:51 crc kubenswrapper[4828]: I1129 08:13:51.539136 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-s9v88_2793e6a5-22f6-4562-8253-c7c6993728fc/manager/0.log" Nov 29 08:13:51 crc kubenswrapper[4828]: I1129 08:13:51.631328 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-zr4sc_0214dfe6-a1ff-4588-a6d5-91d7c3c52a2d/manager/0.log" Nov 29 08:13:51 crc kubenswrapper[4828]: I1129 08:13:51.680906 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-zr4sc_0214dfe6-a1ff-4588-a6d5-91d7c3c52a2d/kube-rbac-proxy/0.log" Nov 29 08:13:51 crc kubenswrapper[4828]: I1129 08:13:51.741201 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-c98mh_7264f040-a8ce-49f1-8422-0b5d03b79531/kube-rbac-proxy/0.log" Nov 29 08:13:51 crc kubenswrapper[4828]: I1129 08:13:51.865794 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-c98mh_7264f040-a8ce-49f1-8422-0b5d03b79531/manager/0.log" Nov 29 08:13:51 crc kubenswrapper[4828]: I1129 08:13:51.933746 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-jsbsw_cd5cbb55-3997-45b7-9452-63f8354cf069/kube-rbac-proxy/0.log" Nov 29 08:13:52 crc 
kubenswrapper[4828]: I1129 08:13:52.009715 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-jsbsw_cd5cbb55-3997-45b7-9452-63f8354cf069/manager/0.log" Nov 29 08:13:52 crc kubenswrapper[4828]: I1129 08:13:52.105840 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-2hkcb_a764d93d-518d-46ef-b135-eae7f3b02985/kube-rbac-proxy/0.log" Nov 29 08:13:52 crc kubenswrapper[4828]: I1129 08:13:52.924819 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-69cvg_8912a20d-9515-4c18-8e19-009876be37d9/kube-rbac-proxy/0.log" Nov 29 08:13:52 crc kubenswrapper[4828]: I1129 08:13:52.937747 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-69cvg_8912a20d-9515-4c18-8e19-009876be37d9/manager/0.log" Nov 29 08:13:53 crc kubenswrapper[4828]: I1129 08:13:53.025781 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-2hkcb_a764d93d-518d-46ef-b135-eae7f3b02985/manager/0.log" Nov 29 08:13:53 crc kubenswrapper[4828]: I1129 08:13:53.111500 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b_b680fae3-b615-465f-bea9-d61a847a6038/kube-rbac-proxy/0.log" Nov 29 08:13:53 crc kubenswrapper[4828]: I1129 08:13:53.196827 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4xtt9b_b680fae3-b615-465f-bea9-d61a847a6038/manager/0.log" Nov 29 08:13:53 crc kubenswrapper[4828]: I1129 08:13:53.520593 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-qgmcb_1819d352-6ff1-4f6a-9a9f-899c6e045c19/registry-server/0.log" Nov 29 08:13:53 crc kubenswrapper[4828]: I1129 08:13:53.625493 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7f7c9dc57b-dhcn7_e839e496-a573-4f7b-819e-5a8f24c20689/operator/0.log" Nov 29 08:13:53 crc kubenswrapper[4828]: I1129 08:13:53.739773 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-s887g_ef13c53a-b7d2-46e7-aabc-37091112d6c6/kube-rbac-proxy/0.log" Nov 29 08:13:53 crc kubenswrapper[4828]: I1129 08:13:53.824524 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-s887g_ef13c53a-b7d2-46e7-aabc-37091112d6c6/manager/0.log" Nov 29 08:13:53 crc kubenswrapper[4828]: I1129 08:13:53.909371 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-r6mpw_7911a66c-1116-4db9-9343-548d40f54e90/kube-rbac-proxy/0.log" Nov 29 08:13:53 crc kubenswrapper[4828]: I1129 08:13:53.989900 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-r6mpw_7911a66c-1116-4db9-9343-548d40f54e90/manager/0.log" Nov 29 08:13:54 crc kubenswrapper[4828]: I1129 08:13:54.097249 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jjtrb_7c2f01b9-cbfb-4781-bd51-2ab29504eafa/operator/0.log" Nov 29 08:13:54 crc kubenswrapper[4828]: I1129 08:13:54.793731 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-xg8sj_a4f6c7bc-09b0-4dda-bd88-76ee93e0a907/kube-rbac-proxy/0.log" Nov 29 08:13:54 crc kubenswrapper[4828]: I1129 08:13:54.828986 4828 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7769b678c8-gjkl6_5b74289e-ed4b-4af7-b250-7b660b9c9102/manager/0.log" Nov 29 08:13:54 crc kubenswrapper[4828]: I1129 08:13:54.888890 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-xg8sj_a4f6c7bc-09b0-4dda-bd88-76ee93e0a907/manager/0.log" Nov 29 08:13:54 crc kubenswrapper[4828]: I1129 08:13:54.938143 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-pkfzx_741effc8-8c8a-420e-b6c0-0b62ebc9bdbf/kube-rbac-proxy/0.log" Nov 29 08:13:55 crc kubenswrapper[4828]: I1129 08:13:55.051754 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-pkfzx_741effc8-8c8a-420e-b6c0-0b62ebc9bdbf/manager/0.log" Nov 29 08:13:55 crc kubenswrapper[4828]: I1129 08:13:55.079571 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-8rrwm_7c7879c2-7253-4728-96b9-44c431d99fd4/kube-rbac-proxy/0.log" Nov 29 08:13:55 crc kubenswrapper[4828]: I1129 08:13:55.154230 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-8rrwm_7c7879c2-7253-4728-96b9-44c431d99fd4/manager/0.log" Nov 29 08:13:55 crc kubenswrapper[4828]: I1129 08:13:55.272958 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-kzq8x_98dc3704-84a8-46b5-aa13-f9de4ebde0a7/manager/0.log" Nov 29 08:13:55 crc kubenswrapper[4828]: I1129 08:13:55.287539 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-kzq8x_98dc3704-84a8-46b5-aa13-f9de4ebde0a7/kube-rbac-proxy/0.log" Nov 29 08:14:11 crc kubenswrapper[4828]: 
I1129 08:14:11.487542 4828 patch_prober.go:28] interesting pod/machine-config-daemon-dgclj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:14:11 crc kubenswrapper[4828]: I1129 08:14:11.488072 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:14:11 crc kubenswrapper[4828]: I1129 08:14:11.488116 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" Nov 29 08:14:11 crc kubenswrapper[4828]: I1129 08:14:11.488704 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"} pod="openshift-machine-config-operator/machine-config-daemon-dgclj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:14:11 crc kubenswrapper[4828]: I1129 08:14:11.488759 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerName="machine-config-daemon" containerID="cri-o://2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440" gracePeriod=600 Nov 29 08:14:11 crc kubenswrapper[4828]: E1129 08:14:11.616365 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:14:12 crc kubenswrapper[4828]: I1129 08:14:12.162113 4828 generic.go:334] "Generic (PLEG): container finished" podID="ce72f1df-15a3-475b-918b-9076a0d9c29c" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440" exitCode=0 Nov 29 08:14:12 crc kubenswrapper[4828]: I1129 08:14:12.162318 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerDied","Data":"2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"} Nov 29 08:14:12 crc kubenswrapper[4828]: I1129 08:14:12.162490 4828 scope.go:117] "RemoveContainer" containerID="2854f093e8173a31c342fd9d9b1c784552e1e835ffc9707a0d4b30a8926c5a1d" Nov 29 08:14:12 crc kubenswrapper[4828]: I1129 08:14:12.163152 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440" Nov 29 08:14:12 crc kubenswrapper[4828]: E1129 08:14:12.163549 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:14:16 crc kubenswrapper[4828]: I1129 08:14:16.923959 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-s2ds9_9a7e6cb9-6c64-425d-92fe-f067a47489ac/control-plane-machine-set-operator/0.log" Nov 29 08:14:17 crc 
kubenswrapper[4828]: I1129 08:14:17.128465 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7njjk_ec00b335-adab-4b39-a98e-b68fdb402a27/machine-api-operator/0.log" Nov 29 08:14:17 crc kubenswrapper[4828]: I1129 08:14:17.139479 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7njjk_ec00b335-adab-4b39-a98e-b68fdb402a27/kube-rbac-proxy/0.log" Nov 29 08:14:26 crc kubenswrapper[4828]: I1129 08:14:26.411962 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440" Nov 29 08:14:26 crc kubenswrapper[4828]: E1129 08:14:26.412838 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:14:29 crc kubenswrapper[4828]: I1129 08:14:29.812920 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-g2ms6_f01b92db-d046-4b1c-a23a-84250830a957/cert-manager-controller/0.log" Nov 29 08:14:30 crc kubenswrapper[4828]: I1129 08:14:30.012662 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-57drp_eb60407c-21f0-49e3-87b6-dca32ff366b6/cert-manager-cainjector/0.log" Nov 29 08:14:30 crc kubenswrapper[4828]: I1129 08:14:30.058426 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-9vbgx_dd2e3aba-f27e-4366-a84e-ed3de11ab39a/cert-manager-webhook/0.log" Nov 29 08:14:39 crc kubenswrapper[4828]: I1129 08:14:39.411887 4828 scope.go:117] "RemoveContainer" 
containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440" Nov 29 08:14:39 crc kubenswrapper[4828]: E1129 08:14:39.412649 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:14:44 crc kubenswrapper[4828]: I1129 08:14:44.159285 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-srhqh_e2d27739-6c6a-49c9-8032-4b206f20007e/nmstate-console-plugin/0.log" Nov 29 08:14:44 crc kubenswrapper[4828]: I1129 08:14:44.295351 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-zcc72_9cff6462-3fcb-4ea2-8d92-6ff9c616313b/nmstate-handler/0.log" Nov 29 08:14:44 crc kubenswrapper[4828]: I1129 08:14:44.446671 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-tzzc4_e0556ed8-0627-45a6-9c96-3deae542a208/nmstate-metrics/0.log" Nov 29 08:14:44 crc kubenswrapper[4828]: I1129 08:14:44.463286 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-tzzc4_e0556ed8-0627-45a6-9c96-3deae542a208/kube-rbac-proxy/0.log" Nov 29 08:14:44 crc kubenswrapper[4828]: I1129 08:14:44.663152 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-ghxdc_2b65ae71-1e9f-439c-9c5c-8980083ea513/nmstate-operator/0.log" Nov 29 08:14:44 crc kubenswrapper[4828]: I1129 08:14:44.729492 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-kr5n6_b6b458f9-e87c-4841-bb7e-a62e1a283434/nmstate-webhook/0.log" Nov 29 08:14:50 crc kubenswrapper[4828]: I1129 08:14:50.411552 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440" Nov 29 08:14:50 crc kubenswrapper[4828]: E1129 08:14:50.412248 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.191731 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5"] Nov 29 08:15:00 crc kubenswrapper[4828]: E1129 08:15:00.192705 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d374c2d-866c-4608-a891-87f698bab258" containerName="container-00" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.192719 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d374c2d-866c-4608-a891-87f698bab258" containerName="container-00" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.192935 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d374c2d-866c-4608-a891-87f698bab258" containerName="container-00" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.193631 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.202912 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5"] Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.202950 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.209714 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.297090 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-secret-volume\") pod \"collect-profiles-29406735-8mht5\" (UID: \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.297299 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-config-volume\") pod \"collect-profiles-29406735-8mht5\" (UID: \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.297355 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjgbm\" (UniqueName: \"kubernetes.io/projected/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-kube-api-access-cjgbm\") pod \"collect-profiles-29406735-8mht5\" (UID: \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.399386 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-config-volume\") pod \"collect-profiles-29406735-8mht5\" (UID: \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.399483 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjgbm\" (UniqueName: \"kubernetes.io/projected/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-kube-api-access-cjgbm\") pod \"collect-profiles-29406735-8mht5\" (UID: \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.399533 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-secret-volume\") pod \"collect-profiles-29406735-8mht5\" (UID: \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.400350 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-config-volume\") pod \"collect-profiles-29406735-8mht5\" (UID: \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.413088 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-secret-volume\") pod \"collect-profiles-29406735-8mht5\" (UID: \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.418988 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjgbm\" (UniqueName: \"kubernetes.io/projected/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-kube-api-access-cjgbm\") pod \"collect-profiles-29406735-8mht5\" (UID: \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5" Nov 29 08:15:00 crc kubenswrapper[4828]: I1129 08:15:00.516107 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5" Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 08:15:01.140999 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5"] Nov 29 08:15:01 crc kubenswrapper[4828]: W1129 08:15:01.143349 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8a7b8cc_a73b_4dde_ab18_dfbd072ca710.slice/crio-a9c58bfc50e9074827c1a7f1c88b8ba8eceeb5cb553d67dcd8000dd4589fdc1f WatchSource:0}: Error finding container a9c58bfc50e9074827c1a7f1c88b8ba8eceeb5cb553d67dcd8000dd4589fdc1f: Status 404 returned error can't find the container with id a9c58bfc50e9074827c1a7f1c88b8ba8eceeb5cb553d67dcd8000dd4589fdc1f Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 08:15:01.246988 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-r78sm_343eaf08-7337-45bd-90e6-650984143598/kube-rbac-proxy/0.log" Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 08:15:01.297183 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-f8648f98b-r78sm_343eaf08-7337-45bd-90e6-650984143598/controller/0.log" Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 08:15:01.426877 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440" Nov 29 08:15:01 crc kubenswrapper[4828]: E1129 08:15:01.427250 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c" Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 08:15:01.493681 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/cp-frr-files/0.log" Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 08:15:01.607547 4828 generic.go:334] "Generic (PLEG): container finished" podID="f8a7b8cc-a73b-4dde-ab18-dfbd072ca710" containerID="0a4a57dd6a584565e58b3c5d1008853fc6d8a237889a926e446966732dd4b2d3" exitCode=0 Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 08:15:01.607603 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5" event={"ID":"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710","Type":"ContainerDied","Data":"0a4a57dd6a584565e58b3c5d1008853fc6d8a237889a926e446966732dd4b2d3"} Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 08:15:01.607860 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5" event={"ID":"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710","Type":"ContainerStarted","Data":"a9c58bfc50e9074827c1a7f1c88b8ba8eceeb5cb553d67dcd8000dd4589fdc1f"} Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 
08:15:01.718570 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/cp-frr-files/0.log" Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 08:15:01.768173 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/cp-metrics/0.log" Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 08:15:01.768606 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/cp-reloader/0.log" Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 08:15:01.793096 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/cp-reloader/0.log" Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 08:15:01.949222 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/cp-frr-files/0.log" Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 08:15:01.960208 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/cp-metrics/0.log" Nov 29 08:15:01 crc kubenswrapper[4828]: I1129 08:15:01.988789 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/cp-reloader/0.log" Nov 29 08:15:02 crc kubenswrapper[4828]: I1129 08:15:02.042357 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/cp-metrics/0.log" Nov 29 08:15:02 crc kubenswrapper[4828]: I1129 08:15:02.263142 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/cp-frr-files/0.log" Nov 29 08:15:02 crc kubenswrapper[4828]: I1129 08:15:02.289417 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/cp-metrics/0.log" Nov 29 08:15:02 crc kubenswrapper[4828]: I1129 08:15:02.300327 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/controller/0.log" Nov 29 08:15:02 crc kubenswrapper[4828]: I1129 08:15:02.301764 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/cp-reloader/0.log" Nov 29 08:15:02 crc kubenswrapper[4828]: I1129 08:15:02.497589 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/kube-rbac-proxy/0.log" Nov 29 08:15:02 crc kubenswrapper[4828]: I1129 08:15:02.497717 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/kube-rbac-proxy-frr/0.log" Nov 29 08:15:02 crc kubenswrapper[4828]: I1129 08:15:02.550932 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/frr-metrics/0.log" Nov 29 08:15:02 crc kubenswrapper[4828]: I1129 08:15:02.727764 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/reloader/0.log" Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.218813 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5"
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.295576 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-klm74_c683344b-cd77-447f-b375-c83eb16100b6/frr-k8s-webhook-server/0.log"
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.362039 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-secret-volume\") pod \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\" (UID: \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\") "
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.362145 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjgbm\" (UniqueName: \"kubernetes.io/projected/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-kube-api-access-cjgbm\") pod \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\" (UID: \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\") "
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.362299 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-config-volume\") pod \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\" (UID: \"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710\") "
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.366002 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-config-volume" (OuterVolumeSpecName: "config-volume") pod "f8a7b8cc-a73b-4dde-ab18-dfbd072ca710" (UID: "f8a7b8cc-a73b-4dde-ab18-dfbd072ca710"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.370207 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-kube-api-access-cjgbm" (OuterVolumeSpecName: "kube-api-access-cjgbm") pod "f8a7b8cc-a73b-4dde-ab18-dfbd072ca710" (UID: "f8a7b8cc-a73b-4dde-ab18-dfbd072ca710"). InnerVolumeSpecName "kube-api-access-cjgbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.374587 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f8a7b8cc-a73b-4dde-ab18-dfbd072ca710" (UID: "f8a7b8cc-a73b-4dde-ab18-dfbd072ca710"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.464794 4828 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.464831 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjgbm\" (UniqueName: \"kubernetes.io/projected/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-kube-api-access-cjgbm\") on node \"crc\" DevicePath \"\""
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.464858 4828 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a7b8cc-a73b-4dde-ab18-dfbd072ca710-config-volume\") on node \"crc\" DevicePath \"\""
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.599208 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64dc5dd5cf-sbhrw_7dc202de-98db-4521-9ab3-a67ce9dff293/manager/0.log"
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.624639 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5" event={"ID":"f8a7b8cc-a73b-4dde-ab18-dfbd072ca710","Type":"ContainerDied","Data":"a9c58bfc50e9074827c1a7f1c88b8ba8eceeb5cb553d67dcd8000dd4589fdc1f"}
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.624694 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9c58bfc50e9074827c1a7f1c88b8ba8eceeb5cb553d67dcd8000dd4589fdc1f"
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.624714 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-8mht5"
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.638062 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-56cbcf7d78-fnjp2_4fc1cb75-193e-440a-a790-2fde8aa47103/webhook-server/0.log"
Nov 29 08:15:03 crc kubenswrapper[4828]: I1129 08:15:03.858759 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kjddj_48a10b21-758a-47e9-8a65-1b6c9b6ba62a/kube-rbac-proxy/0.log"
Nov 29 08:15:04 crc kubenswrapper[4828]: I1129 08:15:04.206366 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-86b52_c4f218ed-01de-4cf0-a800-ca644528acc3/frr/0.log"
Nov 29 08:15:04 crc kubenswrapper[4828]: I1129 08:15:04.332372 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk"]
Nov 29 08:15:04 crc kubenswrapper[4828]: I1129 08:15:04.359790 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-chjbk"]
Nov 29 08:15:04 crc kubenswrapper[4828]: I1129 08:15:04.380891 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kjddj_48a10b21-758a-47e9-8a65-1b6c9b6ba62a/speaker/0.log"
Nov 29 08:15:05 crc kubenswrapper[4828]: I1129 08:15:05.426131 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b708966-6ad1-4b32-abe6-097320e1b348" path="/var/lib/kubelet/pods/8b708966-6ad1-4b32-abe6-097320e1b348/volumes"
Nov 29 08:15:12 crc kubenswrapper[4828]: I1129 08:15:12.412455 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:15:12 crc kubenswrapper[4828]: E1129 08:15:12.413641 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:15:13 crc kubenswrapper[4828]: I1129 08:15:13.867918 4828 scope.go:117] "RemoveContainer" containerID="0f5059304e2a77966ade9ab64f5326c1c9dec7e20eb0b26278c3d6f928b56de4"
Nov 29 08:15:17 crc kubenswrapper[4828]: I1129 08:15:17.527253 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn_d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2/util/0.log"
Nov 29 08:15:17 crc kubenswrapper[4828]: I1129 08:15:17.749980 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn_d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2/pull/0.log"
Nov 29 08:15:17 crc kubenswrapper[4828]: I1129 08:15:17.765422 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn_d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2/util/0.log"
Nov 29 08:15:17 crc kubenswrapper[4828]: I1129 08:15:17.778411 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn_d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2/pull/0.log"
Nov 29 08:15:17 crc kubenswrapper[4828]: I1129 08:15:17.984636 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn_d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2/extract/0.log"
Nov 29 08:15:18 crc kubenswrapper[4828]: I1129 08:15:18.001615 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn_d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2/util/0.log"
Nov 29 08:15:18 crc kubenswrapper[4828]: I1129 08:15:18.045226 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhjdjn_d9a1e372-3dc1-4ea6-a97d-e1e64d154ba2/pull/0.log"
Nov 29 08:15:18 crc kubenswrapper[4828]: I1129 08:15:18.182720 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9_f7790c72-dd1d-405c-8360-a63989834be8/util/0.log"
Nov 29 08:15:18 crc kubenswrapper[4828]: I1129 08:15:18.349941 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9_f7790c72-dd1d-405c-8360-a63989834be8/pull/0.log"
Nov 29 08:15:18 crc kubenswrapper[4828]: I1129 08:15:18.358133 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9_f7790c72-dd1d-405c-8360-a63989834be8/util/0.log"
Nov 29 08:15:18 crc kubenswrapper[4828]: I1129 08:15:18.394612 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9_f7790c72-dd1d-405c-8360-a63989834be8/pull/0.log"
Nov 29 08:15:18 crc kubenswrapper[4828]: I1129 08:15:18.536551 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9_f7790c72-dd1d-405c-8360-a63989834be8/pull/0.log"
Nov 29 08:15:18 crc kubenswrapper[4828]: I1129 08:15:18.546976 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9_f7790c72-dd1d-405c-8360-a63989834be8/util/0.log"
Nov 29 08:15:18 crc kubenswrapper[4828]: I1129 08:15:18.550787 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gjts9_f7790c72-dd1d-405c-8360-a63989834be8/extract/0.log"
Nov 29 08:15:18 crc kubenswrapper[4828]: I1129 08:15:18.775596 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-84trl_fc085063-478e-40a4-8810-f62d1d6bfa64/extract-utilities/0.log"
Nov 29 08:15:18 crc kubenswrapper[4828]: I1129 08:15:18.883952 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-84trl_fc085063-478e-40a4-8810-f62d1d6bfa64/extract-utilities/0.log"
Nov 29 08:15:18 crc kubenswrapper[4828]: I1129 08:15:18.899682 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-84trl_fc085063-478e-40a4-8810-f62d1d6bfa64/extract-content/0.log"
Nov 29 08:15:18 crc kubenswrapper[4828]: I1129 08:15:18.935091 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-84trl_fc085063-478e-40a4-8810-f62d1d6bfa64/extract-content/0.log"
Nov 29 08:15:19 crc kubenswrapper[4828]: I1129 08:15:19.095096 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-84trl_fc085063-478e-40a4-8810-f62d1d6bfa64/extract-utilities/0.log"
Nov 29 08:15:19 crc kubenswrapper[4828]: I1129 08:15:19.176485 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-84trl_fc085063-478e-40a4-8810-f62d1d6bfa64/extract-content/0.log"
Nov 29 08:15:19 crc kubenswrapper[4828]: I1129 08:15:19.390593 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bhkvr_f3a82506-2db4-42bb-9aa7-db19ebf97f06/extract-utilities/0.log"
Nov 29 08:15:19 crc kubenswrapper[4828]: I1129 08:15:19.672743 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bhkvr_f3a82506-2db4-42bb-9aa7-db19ebf97f06/extract-content/0.log"
Nov 29 08:15:19 crc kubenswrapper[4828]: I1129 08:15:19.845126 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-84trl_fc085063-478e-40a4-8810-f62d1d6bfa64/registry-server/0.log"
Nov 29 08:15:19 crc kubenswrapper[4828]: I1129 08:15:19.860442 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bhkvr_f3a82506-2db4-42bb-9aa7-db19ebf97f06/extract-utilities/0.log"
Nov 29 08:15:19 crc kubenswrapper[4828]: I1129 08:15:19.894252 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bhkvr_f3a82506-2db4-42bb-9aa7-db19ebf97f06/extract-content/0.log"
Nov 29 08:15:20 crc kubenswrapper[4828]: I1129 08:15:20.015992 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bhkvr_f3a82506-2db4-42bb-9aa7-db19ebf97f06/extract-utilities/0.log"
Nov 29 08:15:20 crc kubenswrapper[4828]: I1129 08:15:20.064492 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bhkvr_f3a82506-2db4-42bb-9aa7-db19ebf97f06/extract-content/0.log"
Nov 29 08:15:20 crc kubenswrapper[4828]: I1129 08:15:20.301105 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zqxp4_8d6f6ac7-9c5b-4828-98e7-d047f395ff83/marketplace-operator/0.log"
Nov 29 08:15:20 crc kubenswrapper[4828]: I1129 08:15:20.315922 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bhkvr_f3a82506-2db4-42bb-9aa7-db19ebf97f06/registry-server/0.log"
Nov 29 08:15:20 crc kubenswrapper[4828]: I1129 08:15:20.322575 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9grx_65a880c4-a44a-4fba-9f14-845905e54799/extract-utilities/0.log"
Nov 29 08:15:20 crc kubenswrapper[4828]: I1129 08:15:20.649871 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9grx_65a880c4-a44a-4fba-9f14-845905e54799/extract-utilities/0.log"
Nov 29 08:15:20 crc kubenswrapper[4828]: I1129 08:15:20.661718 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9grx_65a880c4-a44a-4fba-9f14-845905e54799/extract-content/0.log"
Nov 29 08:15:20 crc kubenswrapper[4828]: I1129 08:15:20.673810 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9grx_65a880c4-a44a-4fba-9f14-845905e54799/extract-content/0.log"
Nov 29 08:15:20 crc kubenswrapper[4828]: I1129 08:15:20.865949 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9grx_65a880c4-a44a-4fba-9f14-845905e54799/extract-utilities/0.log"
Nov 29 08:15:20 crc kubenswrapper[4828]: I1129 08:15:20.899512 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9grx_65a880c4-a44a-4fba-9f14-845905e54799/extract-content/0.log"
Nov 29 08:15:21 crc kubenswrapper[4828]: I1129 08:15:21.077720 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9grx_65a880c4-a44a-4fba-9f14-845905e54799/registry-server/0.log"
Nov 29 08:15:21 crc kubenswrapper[4828]: I1129 08:15:21.131420 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7vccf_4766d758-11c2-400b-89fd-4b1de688f74d/extract-utilities/0.log"
Nov 29 08:15:21 crc kubenswrapper[4828]: I1129 08:15:21.265385 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7vccf_4766d758-11c2-400b-89fd-4b1de688f74d/extract-content/0.log"
Nov 29 08:15:21 crc kubenswrapper[4828]: I1129 08:15:21.266114 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7vccf_4766d758-11c2-400b-89fd-4b1de688f74d/extract-utilities/0.log"
Nov 29 08:15:21 crc kubenswrapper[4828]: I1129 08:15:21.287769 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7vccf_4766d758-11c2-400b-89fd-4b1de688f74d/extract-content/0.log"
Nov 29 08:15:21 crc kubenswrapper[4828]: I1129 08:15:21.490775 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7vccf_4766d758-11c2-400b-89fd-4b1de688f74d/extract-content/0.log"
Nov 29 08:15:21 crc kubenswrapper[4828]: I1129 08:15:21.504444 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7vccf_4766d758-11c2-400b-89fd-4b1de688f74d/extract-utilities/0.log"
Nov 29 08:15:22 crc kubenswrapper[4828]: I1129 08:15:22.097163 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7vccf_4766d758-11c2-400b-89fd-4b1de688f74d/registry-server/0.log"
Nov 29 08:15:23 crc kubenswrapper[4828]: I1129 08:15:23.412479 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:15:23 crc kubenswrapper[4828]: E1129 08:15:23.413159 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:15:36 crc kubenswrapper[4828]: I1129 08:15:36.412476 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:15:36 crc kubenswrapper[4828]: E1129 08:15:36.413316 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:15:44 crc kubenswrapper[4828]: E1129 08:15:44.342841 4828 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.96:39214->38.129.56.96:45631: write tcp 38.129.56.96:39214->38.129.56.96:45631: write: broken pipe
Nov 29 08:15:51 crc kubenswrapper[4828]: I1129 08:15:51.419402 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:15:51 crc kubenswrapper[4828]: E1129 08:15:51.420147 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:15:58 crc kubenswrapper[4828]: E1129 08:15:58.673387 4828 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.96:50820->38.129.56.96:45631: write tcp 38.129.56.96:50820->38.129.56.96:45631: write: broken pipe
Nov 29 08:16:04 crc kubenswrapper[4828]: I1129 08:16:04.411994 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:16:04 crc kubenswrapper[4828]: E1129 08:16:04.412912 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:16:15 crc kubenswrapper[4828]: I1129 08:16:15.412398 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:16:15 crc kubenswrapper[4828]: E1129 08:16:15.413160 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:16:30 crc kubenswrapper[4828]: I1129 08:16:30.412326 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:16:30 crc kubenswrapper[4828]: E1129 08:16:30.413011 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:16:43 crc kubenswrapper[4828]: I1129 08:16:43.413288 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:16:43 crc kubenswrapper[4828]: E1129 08:16:43.416375 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:16:56 crc kubenswrapper[4828]: I1129 08:16:56.413286 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:16:56 crc kubenswrapper[4828]: E1129 08:16:56.414043 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:17:09 crc kubenswrapper[4828]: I1129 08:17:09.412110 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:17:09 crc kubenswrapper[4828]: E1129 08:17:09.413062 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:17:19 crc kubenswrapper[4828]: I1129 08:17:19.967590 4828 generic.go:334] "Generic (PLEG): container finished" podID="3f0b8db8-c2d6-41c8-bf9d-904788239b26" containerID="b0eb5fe1bc7f0d04f98249af0682bf52cb9c8b9984051d644ed9ebd989ce62d4" exitCode=0
Nov 29 08:17:19 crc kubenswrapper[4828]: I1129 08:17:19.967712 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pz8wj/must-gather-m7x5t" event={"ID":"3f0b8db8-c2d6-41c8-bf9d-904788239b26","Type":"ContainerDied","Data":"b0eb5fe1bc7f0d04f98249af0682bf52cb9c8b9984051d644ed9ebd989ce62d4"}
Nov 29 08:17:19 crc kubenswrapper[4828]: I1129 08:17:19.968794 4828 scope.go:117] "RemoveContainer" containerID="b0eb5fe1bc7f0d04f98249af0682bf52cb9c8b9984051d644ed9ebd989ce62d4"
Nov 29 08:17:20 crc kubenswrapper[4828]: I1129 08:17:20.856021 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pz8wj_must-gather-m7x5t_3f0b8db8-c2d6-41c8-bf9d-904788239b26/gather/0.log"
Nov 29 08:17:22 crc kubenswrapper[4828]: I1129 08:17:22.411516 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:17:22 crc kubenswrapper[4828]: E1129 08:17:22.412009 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:17:29 crc kubenswrapper[4828]: I1129 08:17:29.140447 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pz8wj/must-gather-m7x5t"]
Nov 29 08:17:29 crc kubenswrapper[4828]: I1129 08:17:29.141487 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-pz8wj/must-gather-m7x5t" podUID="3f0b8db8-c2d6-41c8-bf9d-904788239b26" containerName="copy" containerID="cri-o://e1458cc588d0901c73dda90f9e9950a5261cb8a2502155c8b63b687ab4fd714b" gracePeriod=2
Nov 29 08:17:29 crc kubenswrapper[4828]: I1129 08:17:29.152314 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pz8wj/must-gather-m7x5t"]
Nov 29 08:17:29 crc kubenswrapper[4828]: I1129 08:17:29.637898 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pz8wj_must-gather-m7x5t_3f0b8db8-c2d6-41c8-bf9d-904788239b26/copy/0.log"
Nov 29 08:17:29 crc kubenswrapper[4828]: I1129 08:17:29.638843 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pz8wj/must-gather-m7x5t"
Nov 29 08:17:29 crc kubenswrapper[4828]: I1129 08:17:29.725826 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c522r\" (UniqueName: \"kubernetes.io/projected/3f0b8db8-c2d6-41c8-bf9d-904788239b26-kube-api-access-c522r\") pod \"3f0b8db8-c2d6-41c8-bf9d-904788239b26\" (UID: \"3f0b8db8-c2d6-41c8-bf9d-904788239b26\") "
Nov 29 08:17:29 crc kubenswrapper[4828]: I1129 08:17:29.725878 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3f0b8db8-c2d6-41c8-bf9d-904788239b26-must-gather-output\") pod \"3f0b8db8-c2d6-41c8-bf9d-904788239b26\" (UID: \"3f0b8db8-c2d6-41c8-bf9d-904788239b26\") "
Nov 29 08:17:29 crc kubenswrapper[4828]: I1129 08:17:29.745021 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f0b8db8-c2d6-41c8-bf9d-904788239b26-kube-api-access-c522r" (OuterVolumeSpecName: "kube-api-access-c522r") pod "3f0b8db8-c2d6-41c8-bf9d-904788239b26" (UID: "3f0b8db8-c2d6-41c8-bf9d-904788239b26"). InnerVolumeSpecName "kube-api-access-c522r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:17:29 crc kubenswrapper[4828]: I1129 08:17:29.827979 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c522r\" (UniqueName: \"kubernetes.io/projected/3f0b8db8-c2d6-41c8-bf9d-904788239b26-kube-api-access-c522r\") on node \"crc\" DevicePath \"\""
Nov 29 08:17:29 crc kubenswrapper[4828]: I1129 08:17:29.895071 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0b8db8-c2d6-41c8-bf9d-904788239b26-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3f0b8db8-c2d6-41c8-bf9d-904788239b26" (UID: "3f0b8db8-c2d6-41c8-bf9d-904788239b26"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 08:17:29 crc kubenswrapper[4828]: I1129 08:17:29.930192 4828 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3f0b8db8-c2d6-41c8-bf9d-904788239b26-must-gather-output\") on node \"crc\" DevicePath \"\""
Nov 29 08:17:30 crc kubenswrapper[4828]: I1129 08:17:30.070186 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pz8wj_must-gather-m7x5t_3f0b8db8-c2d6-41c8-bf9d-904788239b26/copy/0.log"
Nov 29 08:17:30 crc kubenswrapper[4828]: I1129 08:17:30.070567 4828 generic.go:334] "Generic (PLEG): container finished" podID="3f0b8db8-c2d6-41c8-bf9d-904788239b26" containerID="e1458cc588d0901c73dda90f9e9950a5261cb8a2502155c8b63b687ab4fd714b" exitCode=143
Nov 29 08:17:30 crc kubenswrapper[4828]: I1129 08:17:30.070617 4828 scope.go:117] "RemoveContainer" containerID="e1458cc588d0901c73dda90f9e9950a5261cb8a2502155c8b63b687ab4fd714b"
Nov 29 08:17:30 crc kubenswrapper[4828]: I1129 08:17:30.070668 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pz8wj/must-gather-m7x5t"
Nov 29 08:17:30 crc kubenswrapper[4828]: I1129 08:17:30.096541 4828 scope.go:117] "RemoveContainer" containerID="b0eb5fe1bc7f0d04f98249af0682bf52cb9c8b9984051d644ed9ebd989ce62d4"
Nov 29 08:17:30 crc kubenswrapper[4828]: I1129 08:17:30.191788 4828 scope.go:117] "RemoveContainer" containerID="e1458cc588d0901c73dda90f9e9950a5261cb8a2502155c8b63b687ab4fd714b"
Nov 29 08:17:30 crc kubenswrapper[4828]: E1129 08:17:30.192200 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1458cc588d0901c73dda90f9e9950a5261cb8a2502155c8b63b687ab4fd714b\": container with ID starting with e1458cc588d0901c73dda90f9e9950a5261cb8a2502155c8b63b687ab4fd714b not found: ID does not exist" containerID="e1458cc588d0901c73dda90f9e9950a5261cb8a2502155c8b63b687ab4fd714b"
Nov 29 08:17:30 crc kubenswrapper[4828]: I1129 08:17:30.192250 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1458cc588d0901c73dda90f9e9950a5261cb8a2502155c8b63b687ab4fd714b"} err="failed to get container status \"e1458cc588d0901c73dda90f9e9950a5261cb8a2502155c8b63b687ab4fd714b\": rpc error: code = NotFound desc = could not find container \"e1458cc588d0901c73dda90f9e9950a5261cb8a2502155c8b63b687ab4fd714b\": container with ID starting with e1458cc588d0901c73dda90f9e9950a5261cb8a2502155c8b63b687ab4fd714b not found: ID does not exist"
Nov 29 08:17:30 crc kubenswrapper[4828]: I1129 08:17:30.192300 4828 scope.go:117] "RemoveContainer" containerID="b0eb5fe1bc7f0d04f98249af0682bf52cb9c8b9984051d644ed9ebd989ce62d4"
Nov 29 08:17:30 crc kubenswrapper[4828]: E1129 08:17:30.192632 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0eb5fe1bc7f0d04f98249af0682bf52cb9c8b9984051d644ed9ebd989ce62d4\": container with ID starting with b0eb5fe1bc7f0d04f98249af0682bf52cb9c8b9984051d644ed9ebd989ce62d4 not found: ID does not exist" containerID="b0eb5fe1bc7f0d04f98249af0682bf52cb9c8b9984051d644ed9ebd989ce62d4"
Nov 29 08:17:30 crc kubenswrapper[4828]: I1129 08:17:30.192673 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0eb5fe1bc7f0d04f98249af0682bf52cb9c8b9984051d644ed9ebd989ce62d4"} err="failed to get container status \"b0eb5fe1bc7f0d04f98249af0682bf52cb9c8b9984051d644ed9ebd989ce62d4\": rpc error: code = NotFound desc = could not find container \"b0eb5fe1bc7f0d04f98249af0682bf52cb9c8b9984051d644ed9ebd989ce62d4\": container with ID starting with b0eb5fe1bc7f0d04f98249af0682bf52cb9c8b9984051d644ed9ebd989ce62d4 not found: ID does not exist"
Nov 29 08:17:31 crc kubenswrapper[4828]: I1129 08:17:31.424187 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f0b8db8-c2d6-41c8-bf9d-904788239b26" path="/var/lib/kubelet/pods/3f0b8db8-c2d6-41c8-bf9d-904788239b26/volumes"
Nov 29 08:17:36 crc kubenswrapper[4828]: I1129 08:17:36.411794 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:17:36 crc kubenswrapper[4828]: E1129 08:17:36.412543 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:17:47 crc kubenswrapper[4828]: I1129 08:17:47.412496 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:17:47 crc kubenswrapper[4828]: E1129 08:17:47.413535 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:17:58 crc kubenswrapper[4828]: I1129 08:17:58.413609 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:17:58 crc kubenswrapper[4828]: E1129 08:17:58.414891 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:18:11 crc kubenswrapper[4828]: I1129 08:18:11.417917 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:18:11 crc kubenswrapper[4828]: E1129 08:18:11.418802 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:18:13 crc kubenswrapper[4828]: I1129 08:18:13.969083 4828 scope.go:117] "RemoveContainer" containerID="c64b0b51c720b9cc1b3525be4fafec604ebbe4f6b3a09831934f737a9e2960cb"
Nov 29 08:18:25 crc kubenswrapper[4828]: I1129 08:18:25.412162 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:18:25 crc kubenswrapper[4828]: E1129 08:18:25.413321 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:18:36 crc kubenswrapper[4828]: I1129 08:18:36.411907 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:18:36 crc kubenswrapper[4828]: E1129 08:18:36.412714 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:18:51 crc kubenswrapper[4828]: I1129 08:18:51.425932 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:18:51 crc kubenswrapper[4828]: E1129 08:18:51.427465 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:19:04 crc kubenswrapper[4828]: I1129 08:19:04.411849 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:19:04 crc kubenswrapper[4828]: E1129 08:19:04.412756 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dgclj_openshift-machine-config-operator(ce72f1df-15a3-475b-918b-9076a0d9c29c)\"" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" podUID="ce72f1df-15a3-475b-918b-9076a0d9c29c"
Nov 29 08:19:14 crc kubenswrapper[4828]: I1129 08:19:14.055994 4828 scope.go:117] "RemoveContainer" containerID="bbf98ff1de190e9f3073e7f59974cfc54983af9a27c8b2010dd6ad5d5f1ecdfd"
Nov 29 08:19:15 crc kubenswrapper[4828]: I1129 08:19:15.411995 4828 scope.go:117] "RemoveContainer" containerID="2223d484b41706d13c2258f5de3a462ec131210f769114d1e76dfc9f44c8c440"
Nov 29 08:19:16 crc kubenswrapper[4828]: I1129 08:19:16.073370 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dgclj" event={"ID":"ce72f1df-15a3-475b-918b-9076a0d9c29c","Type":"ContainerStarted","Data":"af4335e43411443e42088079697a320949dd24f2a74b6d0ee0ed6f5e34526893"}
Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.539004 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bt88x"]
Nov 29 08:20:55 crc kubenswrapper[4828]: E1129 08:20:55.540221 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8a7b8cc-a73b-4dde-ab18-dfbd072ca710" containerName="collect-profiles"
Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.540240 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8a7b8cc-a73b-4dde-ab18-dfbd072ca710" containerName="collect-profiles"
Nov 29 08:20:55 crc kubenswrapper[4828]: E1129 08:20:55.540282 4828 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="3f0b8db8-c2d6-41c8-bf9d-904788239b26" containerName="gather" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.540294 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0b8db8-c2d6-41c8-bf9d-904788239b26" containerName="gather" Nov 29 08:20:55 crc kubenswrapper[4828]: E1129 08:20:55.540319 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0b8db8-c2d6-41c8-bf9d-904788239b26" containerName="copy" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.540329 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0b8db8-c2d6-41c8-bf9d-904788239b26" containerName="copy" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.540657 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f0b8db8-c2d6-41c8-bf9d-904788239b26" containerName="copy" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.540694 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f0b8db8-c2d6-41c8-bf9d-904788239b26" containerName="gather" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.540724 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8a7b8cc-a73b-4dde-ab18-dfbd072ca710" containerName="collect-profiles" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.549386 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.571037 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bt88x"] Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.643670 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w6j7\" (UniqueName: \"kubernetes.io/projected/e21bbb9f-33a8-4835-8de3-10c658d580d6-kube-api-access-9w6j7\") pod \"redhat-operators-bt88x\" (UID: \"e21bbb9f-33a8-4835-8de3-10c658d580d6\") " pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.644059 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e21bbb9f-33a8-4835-8de3-10c658d580d6-catalog-content\") pod \"redhat-operators-bt88x\" (UID: \"e21bbb9f-33a8-4835-8de3-10c658d580d6\") " pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.644158 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e21bbb9f-33a8-4835-8de3-10c658d580d6-utilities\") pod \"redhat-operators-bt88x\" (UID: \"e21bbb9f-33a8-4835-8de3-10c658d580d6\") " pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.746133 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w6j7\" (UniqueName: \"kubernetes.io/projected/e21bbb9f-33a8-4835-8de3-10c658d580d6-kube-api-access-9w6j7\") pod \"redhat-operators-bt88x\" (UID: \"e21bbb9f-33a8-4835-8de3-10c658d580d6\") " pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.746198 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e21bbb9f-33a8-4835-8de3-10c658d580d6-catalog-content\") pod \"redhat-operators-bt88x\" (UID: \"e21bbb9f-33a8-4835-8de3-10c658d580d6\") " pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.746287 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e21bbb9f-33a8-4835-8de3-10c658d580d6-utilities\") pod \"redhat-operators-bt88x\" (UID: \"e21bbb9f-33a8-4835-8de3-10c658d580d6\") " pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.746751 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e21bbb9f-33a8-4835-8de3-10c658d580d6-catalog-content\") pod \"redhat-operators-bt88x\" (UID: \"e21bbb9f-33a8-4835-8de3-10c658d580d6\") " pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.746857 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e21bbb9f-33a8-4835-8de3-10c658d580d6-utilities\") pod \"redhat-operators-bt88x\" (UID: \"e21bbb9f-33a8-4835-8de3-10c658d580d6\") " pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.765326 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w6j7\" (UniqueName: \"kubernetes.io/projected/e21bbb9f-33a8-4835-8de3-10c658d580d6-kube-api-access-9w6j7\") pod \"redhat-operators-bt88x\" (UID: \"e21bbb9f-33a8-4835-8de3-10c658d580d6\") " pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:20:55 crc kubenswrapper[4828]: I1129 08:20:55.885956 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:20:56 crc kubenswrapper[4828]: I1129 08:20:56.374084 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bt88x"] Nov 29 08:20:57 crc kubenswrapper[4828]: I1129 08:20:57.007208 4828 generic.go:334] "Generic (PLEG): container finished" podID="e21bbb9f-33a8-4835-8de3-10c658d580d6" containerID="db234ac120cad845a4d4ba6c87832d7ac034ddd66b4394474b030e026ae68c52" exitCode=0 Nov 29 08:20:57 crc kubenswrapper[4828]: I1129 08:20:57.007443 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt88x" event={"ID":"e21bbb9f-33a8-4835-8de3-10c658d580d6","Type":"ContainerDied","Data":"db234ac120cad845a4d4ba6c87832d7ac034ddd66b4394474b030e026ae68c52"} Nov 29 08:20:57 crc kubenswrapper[4828]: I1129 08:20:57.007548 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt88x" event={"ID":"e21bbb9f-33a8-4835-8de3-10c658d580d6","Type":"ContainerStarted","Data":"fbb59b5f9fd0db059467821a0305614b28498bac67c52c9d3c3b3cf4cc29e8b3"} Nov 29 08:20:57 crc kubenswrapper[4828]: I1129 08:20:57.010991 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:20:58 crc kubenswrapper[4828]: I1129 08:20:58.016971 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt88x" event={"ID":"e21bbb9f-33a8-4835-8de3-10c658d580d6","Type":"ContainerStarted","Data":"6f2e78cc34147dc7c865a365bb37003667331cf26b992b9b8ad0ecd62b240227"} Nov 29 08:21:01 crc kubenswrapper[4828]: I1129 08:21:01.046732 4828 generic.go:334] "Generic (PLEG): container finished" podID="e21bbb9f-33a8-4835-8de3-10c658d580d6" containerID="6f2e78cc34147dc7c865a365bb37003667331cf26b992b9b8ad0ecd62b240227" exitCode=0 Nov 29 08:21:01 crc kubenswrapper[4828]: I1129 08:21:01.046801 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-bt88x" event={"ID":"e21bbb9f-33a8-4835-8de3-10c658d580d6","Type":"ContainerDied","Data":"6f2e78cc34147dc7c865a365bb37003667331cf26b992b9b8ad0ecd62b240227"} Nov 29 08:21:02 crc kubenswrapper[4828]: I1129 08:21:02.063315 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt88x" event={"ID":"e21bbb9f-33a8-4835-8de3-10c658d580d6","Type":"ContainerStarted","Data":"d0f728b81962e1bca614d37f76e0d2758741e4d68406cadbbb5f2ece4cf3045f"} Nov 29 08:21:02 crc kubenswrapper[4828]: I1129 08:21:02.092529 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bt88x" podStartSLOduration=2.587017915 podStartE2EDuration="7.09250544s" podCreationTimestamp="2025-11-29 08:20:55 +0000 UTC" firstStartedPulling="2025-11-29 08:20:57.010664882 +0000 UTC m=+4796.632740940" lastFinishedPulling="2025-11-29 08:21:01.516152407 +0000 UTC m=+4801.138228465" observedRunningTime="2025-11-29 08:21:02.086716134 +0000 UTC m=+4801.708792192" watchObservedRunningTime="2025-11-29 08:21:02.09250544 +0000 UTC m=+4801.714581498" Nov 29 08:21:05 crc kubenswrapper[4828]: I1129 08:21:05.886232 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:21:05 crc kubenswrapper[4828]: I1129 08:21:05.886995 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:21:06 crc kubenswrapper[4828]: I1129 08:21:06.935792 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bt88x" podUID="e21bbb9f-33a8-4835-8de3-10c658d580d6" containerName="registry-server" probeResult="failure" output=< Nov 29 08:21:06 crc kubenswrapper[4828]: timeout: failed to connect service ":50051" within 1s Nov 29 08:21:06 crc kubenswrapper[4828]: > Nov 29 08:21:15 crc kubenswrapper[4828]: I1129 
08:21:15.963775 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:21:16 crc kubenswrapper[4828]: I1129 08:21:16.025254 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:21:16 crc kubenswrapper[4828]: I1129 08:21:16.203830 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bt88x"] Nov 29 08:21:17 crc kubenswrapper[4828]: I1129 08:21:17.189131 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bt88x" podUID="e21bbb9f-33a8-4835-8de3-10c658d580d6" containerName="registry-server" containerID="cri-o://d0f728b81962e1bca614d37f76e0d2758741e4d68406cadbbb5f2ece4cf3045f" gracePeriod=2 Nov 29 08:21:17 crc kubenswrapper[4828]: I1129 08:21:17.665733 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:21:17 crc kubenswrapper[4828]: I1129 08:21:17.849675 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e21bbb9f-33a8-4835-8de3-10c658d580d6-catalog-content\") pod \"e21bbb9f-33a8-4835-8de3-10c658d580d6\" (UID: \"e21bbb9f-33a8-4835-8de3-10c658d580d6\") " Nov 29 08:21:17 crc kubenswrapper[4828]: I1129 08:21:17.855708 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e21bbb9f-33a8-4835-8de3-10c658d580d6-utilities\") pod \"e21bbb9f-33a8-4835-8de3-10c658d580d6\" (UID: \"e21bbb9f-33a8-4835-8de3-10c658d580d6\") " Nov 29 08:21:17 crc kubenswrapper[4828]: I1129 08:21:17.855783 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w6j7\" (UniqueName: 
\"kubernetes.io/projected/e21bbb9f-33a8-4835-8de3-10c658d580d6-kube-api-access-9w6j7\") pod \"e21bbb9f-33a8-4835-8de3-10c658d580d6\" (UID: \"e21bbb9f-33a8-4835-8de3-10c658d580d6\") " Nov 29 08:21:17 crc kubenswrapper[4828]: I1129 08:21:17.858471 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e21bbb9f-33a8-4835-8de3-10c658d580d6-utilities" (OuterVolumeSpecName: "utilities") pod "e21bbb9f-33a8-4835-8de3-10c658d580d6" (UID: "e21bbb9f-33a8-4835-8de3-10c658d580d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:21:17 crc kubenswrapper[4828]: I1129 08:21:17.877548 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e21bbb9f-33a8-4835-8de3-10c658d580d6-kube-api-access-9w6j7" (OuterVolumeSpecName: "kube-api-access-9w6j7") pod "e21bbb9f-33a8-4835-8de3-10c658d580d6" (UID: "e21bbb9f-33a8-4835-8de3-10c658d580d6"). InnerVolumeSpecName "kube-api-access-9w6j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:21:17 crc kubenswrapper[4828]: I1129 08:21:17.959723 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e21bbb9f-33a8-4835-8de3-10c658d580d6-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:21:17 crc kubenswrapper[4828]: I1129 08:21:17.959777 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w6j7\" (UniqueName: \"kubernetes.io/projected/e21bbb9f-33a8-4835-8de3-10c658d580d6-kube-api-access-9w6j7\") on node \"crc\" DevicePath \"\"" Nov 29 08:21:17 crc kubenswrapper[4828]: I1129 08:21:17.981385 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e21bbb9f-33a8-4835-8de3-10c658d580d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e21bbb9f-33a8-4835-8de3-10c658d580d6" (UID: "e21bbb9f-33a8-4835-8de3-10c658d580d6"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.062527 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e21bbb9f-33a8-4835-8de3-10c658d580d6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.201092 4828 generic.go:334] "Generic (PLEG): container finished" podID="e21bbb9f-33a8-4835-8de3-10c658d580d6" containerID="d0f728b81962e1bca614d37f76e0d2758741e4d68406cadbbb5f2ece4cf3045f" exitCode=0 Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.201147 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt88x" event={"ID":"e21bbb9f-33a8-4835-8de3-10c658d580d6","Type":"ContainerDied","Data":"d0f728b81962e1bca614d37f76e0d2758741e4d68406cadbbb5f2ece4cf3045f"} Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.201180 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bt88x" Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.201221 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt88x" event={"ID":"e21bbb9f-33a8-4835-8de3-10c658d580d6","Type":"ContainerDied","Data":"fbb59b5f9fd0db059467821a0305614b28498bac67c52c9d3c3b3cf4cc29e8b3"} Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.201260 4828 scope.go:117] "RemoveContainer" containerID="d0f728b81962e1bca614d37f76e0d2758741e4d68406cadbbb5f2ece4cf3045f" Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.230815 4828 scope.go:117] "RemoveContainer" containerID="6f2e78cc34147dc7c865a365bb37003667331cf26b992b9b8ad0ecd62b240227" Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.267007 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bt88x"] Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.271604 4828 scope.go:117] "RemoveContainer" containerID="db234ac120cad845a4d4ba6c87832d7ac034ddd66b4394474b030e026ae68c52" Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.282603 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bt88x"] Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.311422 4828 scope.go:117] "RemoveContainer" containerID="d0f728b81962e1bca614d37f76e0d2758741e4d68406cadbbb5f2ece4cf3045f" Nov 29 08:21:18 crc kubenswrapper[4828]: E1129 08:21:18.311822 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0f728b81962e1bca614d37f76e0d2758741e4d68406cadbbb5f2ece4cf3045f\": container with ID starting with d0f728b81962e1bca614d37f76e0d2758741e4d68406cadbbb5f2ece4cf3045f not found: ID does not exist" containerID="d0f728b81962e1bca614d37f76e0d2758741e4d68406cadbbb5f2ece4cf3045f" Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.311929 4828 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0f728b81962e1bca614d37f76e0d2758741e4d68406cadbbb5f2ece4cf3045f"} err="failed to get container status \"d0f728b81962e1bca614d37f76e0d2758741e4d68406cadbbb5f2ece4cf3045f\": rpc error: code = NotFound desc = could not find container \"d0f728b81962e1bca614d37f76e0d2758741e4d68406cadbbb5f2ece4cf3045f\": container with ID starting with d0f728b81962e1bca614d37f76e0d2758741e4d68406cadbbb5f2ece4cf3045f not found: ID does not exist" Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.312016 4828 scope.go:117] "RemoveContainer" containerID="6f2e78cc34147dc7c865a365bb37003667331cf26b992b9b8ad0ecd62b240227" Nov 29 08:21:18 crc kubenswrapper[4828]: E1129 08:21:18.312680 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f2e78cc34147dc7c865a365bb37003667331cf26b992b9b8ad0ecd62b240227\": container with ID starting with 6f2e78cc34147dc7c865a365bb37003667331cf26b992b9b8ad0ecd62b240227 not found: ID does not exist" containerID="6f2e78cc34147dc7c865a365bb37003667331cf26b992b9b8ad0ecd62b240227" Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.312704 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2e78cc34147dc7c865a365bb37003667331cf26b992b9b8ad0ecd62b240227"} err="failed to get container status \"6f2e78cc34147dc7c865a365bb37003667331cf26b992b9b8ad0ecd62b240227\": rpc error: code = NotFound desc = could not find container \"6f2e78cc34147dc7c865a365bb37003667331cf26b992b9b8ad0ecd62b240227\": container with ID starting with 6f2e78cc34147dc7c865a365bb37003667331cf26b992b9b8ad0ecd62b240227 not found: ID does not exist" Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.312718 4828 scope.go:117] "RemoveContainer" containerID="db234ac120cad845a4d4ba6c87832d7ac034ddd66b4394474b030e026ae68c52" Nov 29 08:21:18 crc kubenswrapper[4828]: E1129 
08:21:18.312895 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db234ac120cad845a4d4ba6c87832d7ac034ddd66b4394474b030e026ae68c52\": container with ID starting with db234ac120cad845a4d4ba6c87832d7ac034ddd66b4394474b030e026ae68c52 not found: ID does not exist" containerID="db234ac120cad845a4d4ba6c87832d7ac034ddd66b4394474b030e026ae68c52" Nov 29 08:21:18 crc kubenswrapper[4828]: I1129 08:21:18.312990 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db234ac120cad845a4d4ba6c87832d7ac034ddd66b4394474b030e026ae68c52"} err="failed to get container status \"db234ac120cad845a4d4ba6c87832d7ac034ddd66b4394474b030e026ae68c52\": rpc error: code = NotFound desc = could not find container \"db234ac120cad845a4d4ba6c87832d7ac034ddd66b4394474b030e026ae68c52\": container with ID starting with db234ac120cad845a4d4ba6c87832d7ac034ddd66b4394474b030e026ae68c52 not found: ID does not exist" Nov 29 08:21:19 crc kubenswrapper[4828]: I1129 08:21:19.424962 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e21bbb9f-33a8-4835-8de3-10c658d580d6" path="/var/lib/kubelet/pods/e21bbb9f-33a8-4835-8de3-10c658d580d6/volumes" Nov 29 08:21:34 crc kubenswrapper[4828]: I1129 08:21:34.930461 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tddns"] Nov 29 08:21:34 crc kubenswrapper[4828]: E1129 08:21:34.931465 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e21bbb9f-33a8-4835-8de3-10c658d580d6" containerName="extract-content" Nov 29 08:21:34 crc kubenswrapper[4828]: I1129 08:21:34.931481 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e21bbb9f-33a8-4835-8de3-10c658d580d6" containerName="extract-content" Nov 29 08:21:34 crc kubenswrapper[4828]: E1129 08:21:34.931491 4828 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e21bbb9f-33a8-4835-8de3-10c658d580d6" containerName="extract-utilities" Nov 29 08:21:34 crc kubenswrapper[4828]: I1129 08:21:34.931498 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e21bbb9f-33a8-4835-8de3-10c658d580d6" containerName="extract-utilities" Nov 29 08:21:34 crc kubenswrapper[4828]: E1129 08:21:34.931513 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e21bbb9f-33a8-4835-8de3-10c658d580d6" containerName="registry-server" Nov 29 08:21:34 crc kubenswrapper[4828]: I1129 08:21:34.931520 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e21bbb9f-33a8-4835-8de3-10c658d580d6" containerName="registry-server" Nov 29 08:21:34 crc kubenswrapper[4828]: I1129 08:21:34.931722 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="e21bbb9f-33a8-4835-8de3-10c658d580d6" containerName="registry-server" Nov 29 08:21:34 crc kubenswrapper[4828]: I1129 08:21:34.933251 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tddns" Nov 29 08:21:34 crc kubenswrapper[4828]: I1129 08:21:34.948983 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tddns"] Nov 29 08:21:35 crc kubenswrapper[4828]: I1129 08:21:35.010066 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/666645df-b883-44e6-95a2-90815121e061-catalog-content\") pod \"community-operators-tddns\" (UID: \"666645df-b883-44e6-95a2-90815121e061\") " pod="openshift-marketplace/community-operators-tddns" Nov 29 08:21:35 crc kubenswrapper[4828]: I1129 08:21:35.010479 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-272lb\" (UniqueName: \"kubernetes.io/projected/666645df-b883-44e6-95a2-90815121e061-kube-api-access-272lb\") pod \"community-operators-tddns\" (UID: 
\"666645df-b883-44e6-95a2-90815121e061\") " pod="openshift-marketplace/community-operators-tddns" Nov 29 08:21:35 crc kubenswrapper[4828]: I1129 08:21:35.010541 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/666645df-b883-44e6-95a2-90815121e061-utilities\") pod \"community-operators-tddns\" (UID: \"666645df-b883-44e6-95a2-90815121e061\") " pod="openshift-marketplace/community-operators-tddns" Nov 29 08:21:35 crc kubenswrapper[4828]: I1129 08:21:35.111711 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/666645df-b883-44e6-95a2-90815121e061-catalog-content\") pod \"community-operators-tddns\" (UID: \"666645df-b883-44e6-95a2-90815121e061\") " pod="openshift-marketplace/community-operators-tddns" Nov 29 08:21:35 crc kubenswrapper[4828]: I1129 08:21:35.111812 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-272lb\" (UniqueName: \"kubernetes.io/projected/666645df-b883-44e6-95a2-90815121e061-kube-api-access-272lb\") pod \"community-operators-tddns\" (UID: \"666645df-b883-44e6-95a2-90815121e061\") " pod="openshift-marketplace/community-operators-tddns" Nov 29 08:21:35 crc kubenswrapper[4828]: I1129 08:21:35.111871 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/666645df-b883-44e6-95a2-90815121e061-utilities\") pod \"community-operators-tddns\" (UID: \"666645df-b883-44e6-95a2-90815121e061\") " pod="openshift-marketplace/community-operators-tddns" Nov 29 08:21:35 crc kubenswrapper[4828]: I1129 08:21:35.112261 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/666645df-b883-44e6-95a2-90815121e061-catalog-content\") pod \"community-operators-tddns\" (UID: 
\"666645df-b883-44e6-95a2-90815121e061\") " pod="openshift-marketplace/community-operators-tddns" Nov 29 08:21:35 crc kubenswrapper[4828]: I1129 08:21:35.112342 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/666645df-b883-44e6-95a2-90815121e061-utilities\") pod \"community-operators-tddns\" (UID: \"666645df-b883-44e6-95a2-90815121e061\") " pod="openshift-marketplace/community-operators-tddns" Nov 29 08:21:35 crc kubenswrapper[4828]: I1129 08:21:35.263658 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-272lb\" (UniqueName: \"kubernetes.io/projected/666645df-b883-44e6-95a2-90815121e061-kube-api-access-272lb\") pod \"community-operators-tddns\" (UID: \"666645df-b883-44e6-95a2-90815121e061\") " pod="openshift-marketplace/community-operators-tddns" Nov 29 08:21:35 crc kubenswrapper[4828]: I1129 08:21:35.270315 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tddns" Nov 29 08:21:35 crc kubenswrapper[4828]: I1129 08:21:35.971359 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tddns"] Nov 29 08:21:36 crc kubenswrapper[4828]: I1129 08:21:36.443161 4828 generic.go:334] "Generic (PLEG): container finished" podID="666645df-b883-44e6-95a2-90815121e061" containerID="2ff57c13c9206092847e7af4f6cd451b52f873f269f55bbce314ed98551db610" exitCode=0 Nov 29 08:21:36 crc kubenswrapper[4828]: I1129 08:21:36.443362 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tddns" event={"ID":"666645df-b883-44e6-95a2-90815121e061","Type":"ContainerDied","Data":"2ff57c13c9206092847e7af4f6cd451b52f873f269f55bbce314ed98551db610"} Nov 29 08:21:36 crc kubenswrapper[4828]: I1129 08:21:36.443623 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tddns" 
event={"ID":"666645df-b883-44e6-95a2-90815121e061","Type":"ContainerStarted","Data":"200b53bb106a72bb2edc38abb8749cb35402c92ee8015e8a50b365fae152462e"} Nov 29 08:21:37 crc kubenswrapper[4828]: I1129 08:21:37.456471 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tddns" event={"ID":"666645df-b883-44e6-95a2-90815121e061","Type":"ContainerStarted","Data":"b043379f09a6cbf5ce8ab46d5a14576a0ab8f1aa5aae1c7147d9008acfac510d"}